Happy New AI-r?

(Image generated in response to a prompt I gave to a GenAI image editor.)

I started the year sending out a graphic sourced from one of the AI design generators, and I signed off as J.AI.

Some of my friends caught on to the joke and reminded me of the time I signed off as iJay the year Apple launched its iPhone. Others reminded me how to correctly spell my name.

So, I decided to address the other elephant in the room, after the butterfly effects of geopolitics. (Read the series of essays I anchored at the end of last year.)

And that is AI: Artificial Intelligence.

Generative AI (the stuff that comes out of OpenAI's ChatGPT, Microsoft's Copilot, Google's Bard, or even Meta's Llama 2 and Grok from X) has captured the chatter within the white-collar world, but it is just a fraction of what's brewing and cooking.

While text, images, and videos generated by LLMs (large language models) are upending creative fields across all types of media (movies, podcasts, advertising jingles, websites, novels, corporate collateral), they are also reframing customer service with voice- and video-enabled chatbots. No surprise, then, that screenwriters and screen actors have gone on strike, and large-scale layoffs are taking place in back-end and front-end process functions, especially in the traditional IT and tech-enabled services sectors.

Software code that could take a team of coders months to put together is now being generated in a matter of seconds by AI. Making sure that it actually works and delivers the outcome expected is another matter, though.
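To make that concrete, here is a minimal sketch in Python of what "making sure it actually works" can look like: a hypothetical AI-generated helper paired with a small test that checks the expected outcome. The function and values are illustrative, not taken from any particular tool.

```python
# A hypothetical AI-generated helper, paired with the checks a human still has to run.

def generated_discount(price: float, percent: float) -> float:
    """Apply a percentage discount to a price (illustrative AI-generated code)."""
    return round(price * (1 - percent / 100), 2)

def test_generated_discount() -> None:
    # The generated code only earns trust once checks like these pass.
    assert generated_discount(100.0, 10) == 90.0
    assert generated_discount(59.99, 0) == 59.99
    assert generated_discount(0.0, 50) == 0.0

if __name__ == "__main__":
    test_generated_discount()
    print("Generated code passed its checks.")
```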

But the leap of faith has been taken. Companies, startups, and professionals are experimenting en masse to find use cases of their own, or just for fun!

This year, however, is going to be different. Maybe the trough of disappointment that follows the euphoria of early adoption, combined with a plateau of higher performance. That is what the investors pouring in their billions are betting on.

So here are my four imperatives for 2024.

AI Regulation: I had a chilling conversation with a tech governance expert who shared that tech gurus, corporate captains, and head honchos are pushing their governments across the world for regulations. This is a first. Ever since the internet entered the public domain, emerging out of the US defense network (look up DARPA, if you are interested), tech has always stayed ahead of regulation. Remember the time lag between a ride-share app starting service and finally being regulated as a taxi-hailing aggregator? Or the case of data privacy, when the GDPR guidelines came into force almost a decade, and many elections, after populations had been subjected to targeted advertising at election time, based on algorithms built around people's behavior.

With generative AI delivering deepfakes flawlessly, amplified by ubiquitous, uninterrupted social media, the potential of AI to skew outcomes in elections across major democracies is no longer science fiction. Some believe stranger outcomes are possible if social media platform algorithms are hacked or compromised to create a stream of AI-generated content customized and personalized to elicit a desired action from a group, or even a single person.

The core issue is the opacity of AI processing. No one, not even the designers who curate the training data and tune the model's weights, can fully explain how the machine learns to generate the output it does once it has been trained on material drawn from across the world wide web. This is also one reason why safeguards are necessary: many experiments have shown that, left to itself, a model can reproduce biases of all kinds and even share details of chemicals and processes for making explosives with a kid whose “prompt” asks about making fireworks to celebrate her grandmother's birthday.
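As an illustration of the kind of safeguard being argued for, here is a deliberately simple Python sketch: screen a prompt against a blocklist before it ever reaches the model. Real guardrails rely on trained safety classifiers and human review rather than keyword lists; every term and name here is assumed for illustration only.

```python
# A deliberately naive safeguard: refuse prompts that touch blocked topics.
# Production systems use trained safety classifiers; a keyword list is only illustrative.

BLOCKED_TERMS = ("explosive", "detonator", "chemical precursor")

def is_safe_prompt(prompt: str) -> bool:
    lowered = prompt.lower()
    return not any(term in lowered for term in BLOCKED_TERMS)

prompt = "How do I mix an explosive for fireworks for my grandmother's birthday?"
if is_safe_prompt(prompt):
    print("Prompt forwarded to the model.")
else:
    print("Prompt refused and routed to a safer answer about store-bought fireworks.")
```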

The rules are coming, even if only a basic one that ensures an extended sandbox phase for testing the output of an AI program. But whether that will be too little, and too easy to bypass, is something governments across the world are taking different views on. As history shows, the countries that built autobahns and freeways ended up driving the automobile revolution, as opposed to the ones that imposed speed limits and/or had men with flags walking in front of cars with internal combustion engines.

AI Compute: this one is for techies, and also for those who have nothing to do with it. Creating those fun videos and outlandish images needs a lot of computing resources, energy, and cooling. Read: chips, electricity, and heat exchangers. That means massive data centers which house those super-sets of chips on circuit boards. It also means power generation and supply, as well as cooling towers, air-conditioning, et al. And while the race for quantum computing is on, that doesn't mean it's only a matter of who has access to the biggest resource pool to pump in the investments needed. I remember, back in the 80s, when a huge amount of legislative and policy back-and-forth was happening over importing India's first supercomputer, and then a bunch of engineers wired together something that performed just as well, simply by hooking an array of commercial-grade computers together with innovative software.

The flip side this time around, of course, is how to beat the energy-to-performance ratios when the computing processes at work inside the box may be incomputable. Maybe GenAI should also have a power-units-used indicator, the way Google's search engine reports how many results were scanned in how many seconds. After all, energy generation has a cost and an impact in greenhouse gases.
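A back-of-the-envelope Python sketch of that indicator idea: estimate and display the energy a response consumed, the way a search page displays results and time taken. The energy-per-token figure is a placeholder assumption, not a measured number.

```python
# Rough sketch of a "power units used" readout for a GenAI response.
# ASSUMED_JOULES_PER_TOKEN is a placeholder; real figures depend on model and hardware.

ASSUMED_JOULES_PER_TOKEN = 0.3

def energy_readout(prompt_tokens: int, output_tokens: int) -> str:
    total_tokens = prompt_tokens + output_tokens
    joules = total_tokens * ASSUMED_JOULES_PER_TOKEN
    watt_hours = joules / 3600  # 1 Wh = 3600 J
    return f"{total_tokens} tokens processed, roughly {joules:.0f} J (~{watt_hours:.3f} Wh)"

print(energy_readout(prompt_tokens=120, output_tokens=480))
```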

The challenge is fundamental. A number of techies assume that AI is just a layer on top of existing digital infrastructure, the proverbial MALAI, as in cream (but used by Indian techies as an acronym for Machine Learning and Artificial Intelligence), on top of the milk. Designers, on the other hand, are at work building models that upend that centralized approach using distributed architecture. I remember a lot of techies in the early 2000s downloading SETI programs that would use the spare capacity of a laptop to crunch signal data in the search for extraterrestrial intelligence. Could this be a viable model for building AI? And that goes back to security and sandbox regulations: how would distributed architectures be kept secure from hackers and from false inputs being inserted into the machine-learning systems that power AI?
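To give that distributed question some shape, here is a toy Python sketch of one ingredient a SETI-style AI network might use: each volunteer's result carries a digest that the coordinator verifies before folding it in. This only catches results altered in transit; a genuinely malicious contributor would need stronger defenses such as redundant computation or cryptographic signatures. All names and values here are illustrative, not a real protocol.

```python
# Toy integrity check for contributions from volunteer machines in a distributed setup.
# A digest catches results altered in transit; it does not, by itself, stop a dishonest worker.

import hashlib
import json

def digest(payload: dict) -> str:
    # Deterministic hash of a worker's result payload.
    return hashlib.sha256(json.dumps(payload, sort_keys=True).encode()).hexdigest()

def package_result(update: dict) -> dict:
    return {"update": update, "digest": digest(update)}

def accept_result(package: dict) -> bool:
    # The coordinator recomputes the digest and rejects anything that no longer matches.
    return digest(package["update"]) == package["digest"]

honest = package_result({"shard": 7, "gradient_norm": 0.042})
tampered = {"update": {"shard": 7, "gradient_norm": 9.9}, "digest": honest["digest"]}

print(accept_result(honest))    # True
print(accept_result(tampered))  # False
```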

AI Use Cases: At this point in time, most people who have used generative AI, whether to design efficient ways of connecting with customers, to find optimal ways of organizing logistics systems, or even to design greeting cards and New Year wishes like I did, are looking at AI as a tool. Almost like another add-on functionality of the web browser: a way to find answers, or better ways to organize data and present outputs. But it is early days yet.

Think how mobiles and computing came together. If you were involved in tech in the late 90s or early 2000s, you had a brick-sized handphone with battery life to support a few calls, a pocket digital organizer or PDA (personal digital assistant, not public display of affection), and a cable to sync your data with your desktop or laptop. Over the next twenty years, not only did all your handheld devices converge into one smartphone, but functionalities from monitoring your health to checking air quality and weather based on your location became simply a swipe away on your mobile screen.

We are going through a similar evolution in AI. As each launch of the next version reveals, more and more functionality is getting baked into core models like Google's Gemini, OpenAI's GPT-4, or Meta's Llama, and it's happening at warp speed.

What took decades is now unfolding in months. Only to be discarded when the next AI version is unveiled, one that is even more intuitive, more pervasive, and more targeted to your personal or organizational needs.

So, what should you do?

Be that kid in the sandbox! Provided the sandbox is put in place quickly and within easy reach in the park of your imagination. Connect with the child in you and let your fun side into how you work, play, and live. And just as you test your “believe but verify” instincts when you come across a too-good-to-be-true “deep fake”, make sure to call it out in time, and neither spread misinformation nor fall prey to disinformation. But then you already know that. After all, you have avoided potential insanity from the dopamine rush of mindlessly consuming social media content, and created your own frameworks for what to learn, where to earn, and where to spend your time and money.

My hope for 2024 is that more AI use cases will be developed to combat bigger challenges like extreme weather, and to deliver simpler solutions for cleaner air, better water, and productive land use for sustainable lifestyles. Meanwhile, I will keep experimenting with more stuff I can keep signing off with j.AI.
