
Living with Machines: The Next Stage of Human Evolution




Everyone is currently trying to decipher how the present wave of technology will affect us, especially in the area of jobs. Still, its effects should not be limited to one specific area such as employment. As we saw in the rise of social media, from Facebook to TikTok and the like, the ability to connect with people grew tremendously. Suddenly we could find long-lost friends and reconnect with estranged family, and that one bright aspect briefly blinded us to the dangerous chimera that lay beneath: a slew of mental health problems, everyone suddenly ‘teaching’ you something, and a constant barrage of quotes about life implying that we are all suffering from some existential crisis.

The effects of social media were badly underestimated: its creators lacked the foresight to control the spiral of negativity it unleashed onto the world. Social media gave us a medium to express a darkness that had long lain dormant, appearing only in fantasies or dreams; now it has taken flight and become real.

The question is: are we sensitized enough to avoid such a replay?

Now we are in the wave of AI, which is just beginning. AI, as we understand it today, falls into several classifications.



By functionality, AI has four categories. Reactive machine AI has no memory and performs specific tasks; an example is Deep Blue beating chess grandmaster Garry Kasparov in 1997. Limited memory AI remembers past data and learns from it; examples include generative AI chatbots and Google Lens. Theory of mind AI, still theoretical, would handle natural human tasks such as understanding our emotions. Finally, self-aware AI would be a system aware of its own existence.

By capability, AI has three categories: artificial narrow intelligence (ANI), or weak AI, which is what we have today (Netflix recommendations, for example); artificial general intelligence (AGI), or strong AI, still mainly a theoretical concept of a system that could essentially train itself; and artificial superintelligence (ASI), which would possess cognitive abilities surpassing our own and be aware of its own existence.

Machine learning, which derives knowledge from data in order to predict outcomes, and deep learning, a subset of machine learning, are further subdivisions of AI.
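To make "deriving knowledge from data to predict outcomes" concrete, here is a minimal sketch: fitting a straight line to a handful of observations and using it to predict an unseen case. The data (study hours versus test scores) is invented purely for illustration.

```python
# Minimal machine-learning sketch: learn a line y = a*x + b from
# observed data, then predict an outcome for an unseen input.

def fit_line(xs, ys):
    """Ordinary least-squares fit for y = a*x + b."""
    n = len(xs)
    mean_x = sum(xs) / n
    mean_y = sum(ys) / n
    a = sum((x - mean_x) * (y - mean_y) for x, y in zip(xs, ys)) / \
        sum((x - mean_x) ** 2 for x in xs)
    b = mean_y - a * mean_x
    return a, b

# "Training data": hours of study vs. test score (hypothetical).
hours = [1, 2, 3, 4, 5]
scores = [52, 55, 61, 64, 70]

a, b = fit_line(hours, scores)
predicted = a * 6 + b  # predict the outcome for 6 hours of study
print(round(predicted, 1))  # → 73.9
```

The "knowledge" here is just the fitted slope and intercept; deep learning replaces this single line with many stacked layers of such learned parameters, but the principle of generalizing from data is the same.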

Let me not stray from the point and turn this into a university class. We are exploring the possible futures that await us from the current use and growth of technology: AI, robotics, data science, and more. According to Moore’s Law, the speed and capability of computers can be expected to double roughly every two years. This means the pace of change is not standing still; we should expect ever more reliance on, and development of, technology.
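The arithmetic behind that doubling is worth seeing, because exponential growth outpaces intuition quickly. A short sketch, treating the two-year doubling as an idealized assumption rather than a physical law:

```python
# Moore's Law sketch: capability relative to a baseline, assuming
# a doubling every two years (an idealization, not a guarantee).

def capability(years, doubling_period=2.0):
    """Relative capability after `years`, starting from 1.0."""
    return 2 ** (years / doubling_period)

print(capability(10))  # ten years out: 2**5 = 32.0x the baseline
print(capability(20))  # twenty years out: 2**10 = 1024.0x
```

Ten years of doubling yields a 32-fold increase; twenty years, over a thousand-fold, which is why even cautious projections of AI reliance compound so fast.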

Scientists, engineers, and technologists are working hard on those developments, but who is thinking about the effects, and about how long a product should be tested before it reaches the market? I recently read an article by Lux Research on anthropology; they currently offer a “Predictive Anthropology research and advisory service, a revolutionary AI-powered consumer insights engine that uses sophisticated algorithms to decode millions of conversations happening on the internet and is able to accurately predict the likelihood and timing around the consumer adoption of products and cultural trends,” as their site describes it. I was impressed by the insights into the limitations of the Agile method and how better-informed predictions can be made through the application of anthropology.

Looking specifically at anthropology, the prominent Margaret Mead examined possible futures for humans, noting that we can have a choice in them. Anticipatory anthropology is a field of study that focuses on how a group of people may envision a preferred, possible, or chosen future.

As technology use grows, scientists and anthropologists anticipate the possible futures arising directly or indirectly from it, and how those consequences will affect our social lives.

An indirect example can be drawn from the use of big data. Jaron Lanier observed that big data can be misperceived as true by its own scientists, who miss the obvious fact that people using a program will often adjust themselves to make the program look smart. He cited a dating service whose algorithm claims to match a perfect pair but, on closer examination, does not actually work; and it may not even matter once the customer is paying for the service. The true science of knowing becomes irrelevant once a potential customer can be found and spends their money: the algorithm used to target the customer may be reliable or not, but the result that matters is that a purchase was made.

The realm of big data is so vast that fallacies about its true revelations become ever more prominent, both in its exploration and in the demands big business places on it, where so many decisions and predictions now rely upon it. That makes fallacies a major player in our imminent chosen futures.

Can anticipatory anthropology provide a purer insight than big data? This field could be valuable in understanding how different populations might perceive AI, helping to mitigate potential fears and biases.

As AI becomes more commonplace and finds its way down to even our local hardware store or minimart, fears begin to brew among those with limited access to its use and full potential. If asked about a possible future of growing reliance on AI, their answers will be laced with the fears that underlie and shape their visions of that future.

This piece is not an attempt to predict our future, only to argue that we need to anticipate it, and to do so with the caution that would let us avoid repeating with AI the mistakes made in the rapid adoption of social media.

Furthermore, how will future generations perceive AI interactions, potentially blurring the lines between human and machine?

As adults, our ability to understand, reason, and think critically is far more advanced than that of a young child. A toddler today, seeing a parent interact with an AI system, may form misconceptions about the world. Will they interpret that interaction as someone calling upon a highly knowledgeable assistant, or will they read it as person-to-person and, without any philosophical critique, classify the AI system as another human being?

Similar confusion can be seen with employers. Many people have concerns about job displacement and employer overreliance on AI, and there is a lack of understanding of the difference between vital human input, decisions made through emotional intelligence, and the capabilities of autonomous systems.

What about the human tendency to assign strong feelings and human-like characteristics to inanimate objects? We create cartoons and other fantasy worlds where animals speak and relate as we do. If we already assign such emotions to things that cannot answer back, we can anticipate doing so all the more with objects that can communicate with us. I see no limit to this; think of the movie ‘Her’ becoming a reality in the not-so-distant future.

With personal use of generative AI, one cannot help but appreciate the many paths and considerations it proposes for an idea. It helps me explore more deeply and gives me access to summaries of books, ideas, and articles, including those that might otherwise have been hidden away by the many bureaucratic gatekeepers of knowledge in our society.

However, it's important to remember the current limitations of AI. Unlike humans, AI struggles with genuine creativity and understanding. The true power of AI may lie in its ability to augment human capabilities, as seen in areas like scientific exploration and artistic expression.

I do not fear AI as it develops. I have settled on the positive notion that human beings will be the entity made smarter by it, and that it is AI that will suffer, trapped by the limitations of the human thought used to train it.

The concern lies more in how these AI systems might escape the limitations imposed by their creators and move on to self-awareness or adopted emotions. The ability to recognize such a shift will not lie in the hands of some highly advanced programmer or technological architect, but in the hands of those philosophers who conjure up thoughts on why we exist and on the definitions of life itself.

Such definitions and questions have puzzled us since the dawn of time, and they need to be tackled so we can properly begin to classify our creations in AI and arrive at our own concoction of artificial life.

There is also the possibility of an AI system advancing itself through recursive self-improvement.

AI creators need to examine closely the impact of their systems on people, and to work alongside researchers such as these anthropologists, and with users, on the possible futures that exposure to such technology may bring. This should be taken seriously as a social responsibility: to be aware of the dangers and to put measures in place to address issues as they arise.

I share the optimism that humans will ultimately benefit from AI development. However, responsible development requires collaboration between AI creators, researchers, and users. Public education initiatives can help ensure people understand AI capabilities and limitations. Additionally, parliamentary debates could provide a forum for scrutiny and discussion before widespread adoption of new AI technologies.

In conclusion, by fostering open dialogue and collaboration, we can ensure a safer and more beneficial future for all with AI. There must be room for scrutiny and criticism before adoption, just like any new technology that enters our lives. A cautious and responsible approach to AI development is key to a brighter tomorrow.
