As previously published in edited form at The Free Lance-Star (March 5, 2023).
Does art imitate life? Look at the impact of science fiction on the real world and you'll find many examples of make-believe that have come true. Space travel, once pure fiction, is now fairly routine again. Then you have cell phones, video chats, drones, and robots. All had roots in the imagination before becoming reality.
But nothing has accelerated in the public mind like artificial intelligence (A.I.) has in the past three to four months. No, we aren't yet at the autonomous robots of the Terminator movies. But after the public release of ChatGPT at the end of last year, it feels like we are a step closer. ChatGPT is a conversational A.I. chatbot that can answer your questions in natural, fluent, human language.
When you hear the term, most of you probably imagine a world where a machine thinks and acts like a human autonomously. In reality, A.I. is a computing system that can mimic human thought and decision-making. The system perceives its environment and acts on its own to reach a goal.
The field of A.I. has been around since 1956, built on the idea that human intelligence “can be so precisely described that a machine can be made to simulate it.” That phrase comes from the Dartmouth College research proposal of the time. A.I. research spans reasoning, knowledge, planning, learning, and perception, drawing on approaches and tools from statistics, probability, economics, mathematics, psychology, and neuroscience.
A.I. has been working behind the scenes for years, hidden within tools you use every day without a second thought. The YouTube algorithm recommends your next video based on your viewing history, Facebook serves you ads based on items you've searched for, and Amazon notifies you that you're probably running low on certain household supplies based on how often you've purchased them in the past.
Self-driving vehicles run entirely on A.I. The cars are trained on the rules of the road: stop lights, balls bouncing into the street, pedestrians, and other traffic. All so the vehicle can make real-time decisions faster and more accurately than a human could.
The algorithms that have been in use for years are shrouded in mystery; no one outside these companies really knows how they function under the hood. That kept A.I. tools in the hands of software engineers, but all of that has changed.
Many advanced A.I. tools released in the past six months are revolutionary, even if they are not always correct or perfect. Only the short-sighted would assume that things like accuracy won't improve rapidly.
ChatGPT is a natural language processor, Midjourney can generate art from a typed description, and there are A.I. tools for voice imitation and even full video generation.
But what can we expect to change in our day-to-day lives? I’ve been using three tools (ChatGPT, Midjourney, and Stable Diffusion) extensively over the past several months to try and find out.
ChatGPT's knowledge is limited to information from 2021 and earlier. You can type and hold a conversation with the bot just as you would with a person, and it will remember what you tell it. Ask it for 10 chicken recipes and it will give you 10; then just type “give me 10 more.” You can tell ChatGPT what ingredients you have in your house and ask what recipes you could make from them. It can write full blogs, essays, tweets, and LinkedIn updates.
Midjourney is only slightly more complicated to use than ChatGPT. It runs on Discord (a voice, text, and video chat app) and uses a series of simple commands to generate A.I. art. Midjourney can create cartoon characters, landscapes, and website design layouts; you're limited only by your imagination and the right kinds of prompts. I used Midjourney to create an image of a person being buried by technology for the homepage of my website.
Stable Diffusion is natively the most complicated of the three. It took some integration with other free tools and some code modification, but I was able to upload 20 pictures of my face so the model could “learn” me. I can now prompt the tool to produce pictures of me in various styles. The picture accompanying this article was generated with prompts including “Pixar character,” “regal portrait,” and “Kratos from the God of War video game.” A lot of the output was junk, but these came out pretty slick. These tools still struggle with human fingers and occasionally, hilariously, produce double-headed images.
These tools are all free to try.
There are some ethical debates to be worked out with A.I. tools. A.I. output is generated from scraping the internet and reading the original work of other authors and artists. Does that count as plagiarism? Does that mean that the work that A.I. produces is even original?
What about personal biases that intentionally or unintentionally get baked into an algorithm? That opens the possibility of output that marginalizes whole groups of people. Facial recognition, for example, has repeatedly had a harder time with darker skin tones.
Students are already using ChatGPT to write essays and submit schoolwork, and Amazon has seen an influx of low-quality, A.I.-written eBooks being self-published. These are people looking for the easy way out or a quick buck.
The one thing I can say for certain is that A.I. tools are here to stay. This genie isn't going back into the bottle. We must learn how to use these tools effectively and ethically. A.I. will continue to get more refined, more accurate, easier to use, and embedded in more devices and software.
Have you tried any of them yet? Hit me up and let me know.
John Barker, vCIO
MBA | CISSP | PMP