Jumping On The AI Bandwagon

Nov 5, 2025 | General

I’ve long held an interest in creating video. I’ve had a YouTube channel for years, focused mainly on photography, with painting and woodworking content added later. I’ve also posted promo videos for my novel there.

And then came AI. At first I was skeptical. I knew AI was getting good, but I assumed that good also meant expensive, well above what I was willing to spend. Turns out, that really isn’t the case. There are several subscription models out there that give a fair amount of generation capability for a fairly reasonable price.

Learning to use the capability of those subscriptions takes some time and a certain amount of trial and error. AI video generators are creative by default. They take some vague input and produce 5-10 second clips of video. The garbage-in, garbage-out rule very much applies. Learning what counts as garbage in is far trickier.

I’ve been working with AI music and video generation for about a month now. I first subscribed to OpenArt, which gets pretty good reviews on YouTube. My only real gripe with OpenArt is that to get their best prices, you have to subscribe for a year. That would be fine except that even if you cancel, there are no refunds after a certain fairly low level of generation. I then gave LTX Studio a try, but with LTX I decided not to subscribe for a year. I’m glad I didn’t. I went through my first monthly allotment of credits on both OpenArt and LTX really quickly, though granted I barely had a clue what I was doing. Credits are the equivalent of dollars, or whatever currency you are using. LTX is geared more towards storyboarding and creating videos from the storyboard. As I understand it, that is pretty much what filmmakers do. My gripe with LTX is that their consistent characters are generated within LTX. I gave the software around 30 images of a character from all angles, and the finished character had only the vaguest resemblance to the input images. For me that was very disappointing. OpenArt didn’t fare much better. A little, but not much.

One of the upsides of OpenArt is that additional credits can be purchased. As of today, 5,000 credits cost $15 (U.S. dollars, that is). Depending on the model, a 10-second video will run 200 credits and up. The Veo 3 model is more expensive because it is newer and has lip-sync capability. LTX, as far as I could ascertain, does not offer a means of purchasing additional credits. I’ll also add that, as best I could figure out, to purchase additional credits on OpenArt one needs to subscribe to buying blocks of 5,000 credits per month. That type of subscription can be cancelled immediately with no penalty, which I did several times.
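To put that pricing in perspective, here is a rough back-of-the-envelope calculation using the figures above. The 200-credit figure is the low end and varies by model, so treat these numbers as illustrative assumptions rather than quoted prices.

```python
# Rough cost math for OpenArt credits, using the figures quoted above.
# These values are assumptions for illustration; check current pricing.
BLOCK_CREDITS = 5000      # credits in one purchased block
BLOCK_PRICE_USD = 15.00   # price of that block in U.S. dollars
CLIP_CREDITS = 200        # low-end cost of one 10-second clip

cost_per_credit = BLOCK_PRICE_USD / BLOCK_CREDITS   # $0.003 per credit
cost_per_clip = CLIP_CREDITS * cost_per_credit      # $0.60 per 10-second clip
clips_per_block = BLOCK_CREDITS // CLIP_CREDITS     # 25 clips per block

print(f"About ${cost_per_clip:.2f} per 10-second clip, "
      f"or roughly {clips_per_block} clips per $15 block.")
```

Twenty-five clips per block sounds generous until you factor in regenerations; as I mention below, getting a usable clip often takes more than one attempt.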

Speaking of lip sync, there are currently two holy grails in AI video generation: consistent characters and lip sync. Neither, in my opinion, is close to perfect. Consistent characters are high on everyone’s priority list because AI video clips really are just clips. Five to ten seconds in length is pretty standard. Some of the lip-sync models will allow much longer clips, up to around 30 seconds. One quickly gets used to planning exactly what one wishes to happen in those 5-10 seconds. If one is planning out, say, a 5-minute video centered around a single character, then getting that character to look exactly the same in every clip becomes pretty desirable. Both OpenArt and LTX offer a consistent-character generator. Both are a bit off, I think. I had much better luck using the image-to-video option, creating a base image from which a video was produced. Think of it as creating frame one of the video as a still that the AI engine picks up and runs with. Lip syncing, the second holy grail, is getting better but still a long way from perfect in my opinion.

My first real AI video was a concept piece on a year in the life of, and around, a single oak tree in a meadow. I started with spring, then summer, fall, and finally winter. I had a doe and fawn entering the meadow, a rainstorm, a rabbit hopping through, and a fox wandering through. Summer saw the grass browning, and then fall introduced leaves changing color and falling. Fall also saw a couple of bucks fighting over a small herd of does. Winter had a blustery storm and falling snow. Finally spring comes around again, and the cycle renews. Nothing terribly complicated, but I still had to generate some videos multiple times.

Prompting, the process of writing in plain language what you want to happen, is a process in itself. You think you have the perfect prompt written, only to discover that the AI engine did not interpret what you wrote the way you intended. I’ll admit, I got better with prompting and had fewer regenerations as I went along. But weird stuff does happen, and prompts often need rephrasing to finally get what is desired.

After my success with Seasons, meaning I got something I liked, I tried a music video. The state of AI is pretty amazing in the music generation department. I have a subscription to Suno AI, which with a very simple prompt gave me an amazing song, fully instrumented and sung. Suno also allows you to upload your own lyrics, and it will create music to go with them. My first song, “Fire and Rainbows,” really, really exceeded my expectations.

OpenArt, as of this writing, is running a contest for music videos. The contest ends towards the end of November. For around 5,000 credits you can upload a song, pick or create your own character, and have the software create a music video. Using Midjourney, which is an AI image-generating engine, I created my singer, loaded his image into the software, uploaded my song from Suno, and hit go. The song was about 4 minutes long, so it took a bit to generate. The results, while interesting, were not in my view ready for prime time. In the final video I created, I think I used less than 45 seconds of the video OpenArt prepared.

My second video I took even further. Using some additional AI software called Audimee AI, I created a voice for my singer by uploading a sample and having the software model that voice. Getting a consistent voice with Suno isn’t currently possible; I think you can get close, but not quite fully there. Hence my modeling a voice with Audimee AI. After the song was complete in Suno, I downloaded the stems, the individual tracks for instruments and vocals. I then ran the isolated vocal track through Audimee and had my modeled voice re-sing the track. I put all the tracks back together in Logic Pro and had a finished song with the same voice as in my previous video. Doing it this way has allowed me to create a “singer” who I will use for all my vocal tracks. In essence, I’ve created an AI performer. I’m not sure about the wide appeal of a rock song with jazz and blues influences, but I’m pretty stoked.
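For anyone who would rather script that final recombination step instead of using a DAW like Logic Pro, here is a minimal sketch using the pydub Python library. The filenames are hypothetical stand-ins for whatever your stem export and Audimee output are actually called, and it assumes the stems are all the same length, which exported stems normally are.

```python
# Mix the Suno instrumental stem back together with the Audimee-resung vocal.
# Filenames are hypothetical; substitute your own exported files.
from pydub import AudioSegment

instrumental = AudioSegment.from_file("instrumental_stem.wav")
vocals = AudioSegment.from_file("vocals_resung_audimee.wav")

# overlay() mixes the second track on top of the first, starting at 0:00.
mix = instrumental.overlay(vocals)
mix.export("finished_song.wav", format="wav")
```

A DAW still gives far more control over levels, timing, and effects; the sketch just shows that the reassembly itself is mechanical once the stems exist.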

To see these and other videos I have on YouTube, go to: https://www.youtube.com/@timothystringer
