9/25/2023
Artificial Intelligence (AI) is changing our world in countless ways. In fact, you have probably interacted with AI without even realizing it. From virtual assistants such as Siri and website chatbots to product recommendations on Amazon, AI is becoming a major part of everyday life. But what exactly is artificial intelligence, and how does it work? What are its limitations and dangers? More importantly, what is in place to ensure that AI benefits us instead of causing harm? This beginner's guide will demystify AI, provide a basic understanding of its capabilities and limitations, and offer an outlook on AI's future.
What is AI?
Before we dive in, let's get the most general question out of the way. AI refers to computer systems or machines designed to perform tasks that would otherwise require human intelligence. These systems can analyze data, recognize patterns, learn from experience, and make informed decisions based on that data. To learn how these key concepts and tools can apply directly to your business, check out Introduction to Artificial Intelligence (AI).
A key distinction between the AI we see in science fiction and the AI that works behind many of our apps and tools is the difference between narrow (or weak) AI and Artificial General Intelligence (AGI), also called strong AI.
What is AGI in AI?
To best understand AGI, first consider today's AI, known as Narrow AI, and keep in mind that AGI is still theoretical. Our expectations for AGI illustrate what current AI still cannot do.
Narrow AI focuses on performing a single task extremely well, such as playing chess, translating languages, or recognizing images. Narrow AI is considered weak because it can only perform within specific parameters. This type of AI exists today and can outperform humans in many tasks.
Artificial General Intelligence (AGI) is a hypothetical AI system exhibiting human-level intelligence and capabilities across a wide range of domains. This type of versatile AI is theorized to adapt to changing surroundings and conditions and to be as effective as humans at problem-solving. AGI does not yet exist. To learn and think like a human, AGI would require self-awareness and consciousness, which raises an entire field of ethical questions. We would need significant leaps toward AGI (and, hopefully, ethical maturity as a collection of societies) before Star Trek's Data or Iron Man's Jarvis becomes a reality.
How Does AI Work?
Traditional programming involves feeding a computer explicit, step-by-step instructions. Machine learning takes a different approach: it allows computers to learn from data without being explicitly programmed.
Let's take a quick overview of how it works (a short code sketch follows the list):
- Training data is fed into a machine-learning algorithm; for example, labeled images teach a system to recognize cats.
- The algorithm analyzes the data to find patterns and relationships, adjusting its internal parameters to optimize pattern detection.
- Once optimized, the model can make predictions or decisions on new, unseen data.
- Over time, the system continues to learn from new data to improve its model.
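To make this concrete, here is a minimal sketch of the train-then-predict loop in Python. The use of scikit-learn and its bundled digits dataset is an illustrative assumption; any machine-learning library follows the same pattern.

```python
# A minimal sketch of the train-then-predict loop described above,
# assuming scikit-learn is installed (pip install scikit-learn).
from sklearn.datasets import load_digits
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split

# 1. Labeled training data: images of handwritten digits with their labels.
X, y = load_digits(return_X_y=True)
X_train, X_new, y_train, y_new = train_test_split(X, y, random_state=0)

# 2. Fitting adjusts the model's internal parameters to the patterns found.
model = LogisticRegression(max_iter=5000)
model.fit(X_train, y_train)

# 3. The optimized model now predicts labels for new, unseen data.
print(f"Accuracy on unseen data: {model.score(X_new, y_new):.2f}")
```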
This iterative approach allows machines to learn and improve without explicit programming. Neural networks, loosely inspired by how the human brain works, are adept at discerning patterns and distinctive features within data, and they empower machines to emulate human decision-making. Notably, deep learning, a specialized sub-field of machine learning built on large neural networks, has gained considerable recognition and progress, propelled by the advent of vast computational resources. These advances in neural networks and machine learning are pivotal contributors to the recent rise of AI applications.
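To give a feel for what a neural network actually computes, here is a toy sketch of a single forward pass through a two-layer network, assuming NumPy; the architecture and numbers are invented for illustration. Training would repeatedly adjust the weight matrices until the outputs match the training labels.

```python
import numpy as np

rng = np.random.default_rng(seed=42)

# A tiny two-layer network: 4 inputs -> 3 hidden units -> 1 output.
# The weight matrices are the "internal parameters" that training adjusts.
W1 = rng.normal(size=(4, 3))
W2 = rng.normal(size=(3, 1))

def relu(x):
    # A simple non-linearity, loosely analogous to a neuron "firing".
    return np.maximum(0.0, x)

def forward(inputs):
    # Data flows through the weighted layers to produce an output.
    hidden = relu(inputs @ W1)
    return hidden @ W2

sample = np.array([0.5, -1.2, 3.0, 0.7])
print(forward(sample))  # Untrained output; training would tune W1 and W2.
```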
What can AI do nowadays, and where are its limits?
AI, trained on existing libraries of content, excels at tasks we struggle with and can handle multiple tasks involving unstructured data. It can recognize patterns in data, make predictions and forecasts, create highly detailed and realistic images from text prompts, or solve a complicated math problem from a photo. AI is all around us, often working quietly in the background to enhance our experience.
How is AI helpful?
- Virtual assistants like Siri, Alexa, and Google Assistant understand verbal commands and questions.
- AI powers product recommendations on Amazon and Netflix to suggest purchases and shows based on your browsing history.
- AI aids doctors by analyzing medical images to detect tumors, abnormalities, and diseases.
- AI can automate mundane tasks that used to be obstacles for people, like drafting appeal letters for insurance claim denials.
- AI can help workers analyze pay equity within workforces or across similar industries.
- Chatbots provide customer service through conversational interfaces.
- AI tools offer thorough and fast analytics of complicated data, including competitive research in a crowded marketplace.
What are some disadvantages?
- Lack of the generalized intelligence that humans have.
- Lack of the common sense that most humans develop by 11 months of age.
- Autonomous vehicles' ability to use environmental sensors and AI to navigate without human input has been overstated, with significant consequences.
- Susceptibility to bias in training data.
- Reliance on large datasets of fresh training data.
While narrow AI can outperform humans in many specialized tasks, the generalized intelligence to work across different domains remains elusive. For example, try asking AI to write a movie script. Without human guidance or context, AI understands only that actions have outcomes as associations between groups of words; it has no grasp of real-world consequences.
Dangers of AI: Bias, Model Collapse, and Model Drift
Although AI can offer unprecedented value and services to previously underserved populations, it can also suffer from unique problems. For example, if a large language model is fed flawed data, the resulting machine-learning bias will skew its answers, and it will be up to the end user to recognize the error. Additionally, if an AI is trained on content created by itself or other AIs, subsequent generations of the model can produce muddier and muddier results, a failure known as model collapse. Finally, model drift describes how a once-accurate model becomes less accurate over time because the world changes around the model and the model doesn't keep up. It is possible to mitigate each of these AI dangers.
What is bias in machine learning?
Machine-learning bias occurs when an algorithm produces prejudiced results because of poor assumptions somewhere in the ML process. Those poor assumptions can live in the design of the algorithm, in the way the learning sample was collected, or be systemic to the society in which the data and the algorithm exist.
- An example occurring early in the learning phase is Sample Bias: the sample is too small or too unrepresentative to teach the AI what the real world looks like (see the toy illustration after this list).
- Prejudice Bias comes from real-world prejudices, which influence how the data is created. The source materials are made with poor assumptions and exclude people found in real-world populations. For example, a stock photography company may have hired only non-disabled models to portray teachers in its shoots, so an AI trained on these photos will tell its users that wheelchair users are not teachers.
- Exclusion Bias happens when the modeler leaves data out because they don't see the outliers or a particular dimension as necessary.
- Algorithmic Bias is built into the model’s code and happens during the computation process. It may push specific results over others. This may be a good thing when done for safety. For example, it is generally accepted that an AI should be designed so that it cannot instruct users how to create weapons and bring violence to others or teach hate speech.
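Here is a toy illustration of sample bias in Python, using entirely synthetic data; the "hiring" scenario and all numbers are invented for the example. Because of how the sample was collected, the model learns group membership as a predictor even though no real-world relationship exists.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)

# Suppose that in the real world "hired" is independent of "group", but
# our sample mostly recorded hires from group 0. The model then learns
# group membership as a predictor, an artifact of the data collection.
n = 1000
group = rng.integers(0, 2, size=n)
hired = np.where(group == 0,
                 rng.random(n) < 0.8,   # group 0 oversampled among hires
                 rng.random(n) < 0.2).astype(int)

model = LogisticRegression().fit(group.reshape(-1, 1), hired)

# Predicted hiring probability by group: skewed purely by the sampling.
print(model.predict_proba([[0], [1]])[:, 1])
```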
How to prevent algorithmic bias?
- The Power of Awareness: Recognizing and genuinely understanding the biases within our AI systems is the first pivotal step towards creating more ethically sound and precise AI solutions.
- Intervention in Action: We should consider a multi-pronged approach to actively curtail these biases. This includes collecting data from diverse sources, inviting third-party experts for impartial audits, and soliciting valuable insights from the wider community (a simple audit sketch follows this list).
- Treading the Regulatory Path: Interestingly, some biases, particularly Algorithmic Bias, might be purposefully integrated as a shield for society's greater good. Yet, it's crucial for us to be conscientious and fully grasp the broader ramifications of such choices.
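As one small, concrete step, a team can audit how well each group is represented in its training data before fitting a model. The sketch below is a minimal example assuming pandas; the column names, groups, and the 10% threshold are all hypothetical.

```python
import pandas as pd

# A tiny audit sketch with hypothetical data: check how well each group
# is represented in the training set before fitting a model.
df = pd.DataFrame({
    "group": ["A"] * 900 + ["B"] * 80 + ["C"] * 20,
    "label": [1, 0] * 500,
})
representation = df["group"].value_counts(normalize=True)
print(representation)

# Flag any group below a chosen threshold (assumption: 10% here).
underrepresented = representation[representation < 0.10]
if not underrepresented.empty:
    print("Consider collecting more data for:", list(underrepresented.index))
```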
What is model collapse?
Model collapse can happen quickly, sometimes as early as the second or third round of model training. It happens when a machine-learning model starts learning from other AI outputs instead of real human data, making its results less diverse and less reliable. Think of it like making copies of a copy of a picture until you can't recognize the original anymore. Eventually, the first-generation learning data is purged and replaced with blurry copies used to train new generations.
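The "copies of copies" effect can be caricatured numerically. In this toy sketch (assuming NumPy, with all numbers invented), each generation fits a simple statistical model to samples produced by the previous generation; over the generations the statistics wander and the spread of the data tends to shrink, which is the loss of diversity that model collapse describes.

```python
import numpy as np

rng = np.random.default_rng(1)

# Generation 0: "human" data. Each later generation is trained only on
# samples produced by the previous generation's fitted model.
data = rng.normal(loc=0.0, scale=1.0, size=25)
for generation in range(8):
    mu, sigma = data.mean(), data.std()
    print(f"generation {generation}: mean={mu:+.2f}, std={sigma:.2f}")
    data = rng.normal(mu, sigma, size=25)  # next gen learns from AI output
```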
What is model drift?
Model drift is an intriguing phenomenon that affects the accuracy of AI systems. Let's dive into a few reasons why this might happen:
- Environmental Shifts: Just as changing seasons can alter the landscape, shifts in the environment can impact a model's performance. Imagine an AI trained to predict the weather. If there's a significant change in the climate over the years, our trusty AI might not be as spot-on with its forecasts.
- Evolving Relationships: Over time, the dynamics between certain variables may evolve, a phenomenon known as concept drift. Consider an AI model gauging consumer product demand. Our model's predictions may not align with actual outcomes if societal trends alter buying behaviors.
- Data Collection Modifications: Sometimes, the method of collecting data or its unit might change, causing a model to misinterpret information. For instance, if a system learns from weight data in kilograms and suddenly receives input in grams, there's bound to be some confusion.
Think of how a cherished photo of your young pet might not resemble them as they grow older. Similarly, a model can stray from accuracy if the variables it was trained on evolve. To maintain reliability, continuous updates and refinements are essential in AI models.
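One common mitigation is to monitor incoming data for distribution shift. The sketch below, assuming NumPy and SciPy with invented weight numbers, compares a feature's live distribution against its training-time distribution using a two-sample Kolmogorov-Smirnov test, which would immediately flag the kilograms-to-grams change described above.

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(2)

# Feature distribution seen at training time (weights in kilograms)...
training_weights = rng.normal(70, 10, size=5000)
# ...versus live data after an upstream change (now arriving in grams).
live_weights = rng.normal(70_000, 10_000, size=500)

# A two-sample Kolmogorov-Smirnov test flags the distribution shift.
statistic, p_value = stats.ks_2samp(training_weights, live_weights)
if p_value < 0.01:
    print(f"Drift detected (KS statistic {statistic:.2f}): "
          "investigate the pipeline or retrain the model.")
```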
Trading Labor for Hype?
AI can harm itself by cannibalizing its own output and failing to recognize bias in its system. However, the greater danger might be overvaluing recycled output and using it to replace human capital. Merriam-Webster defines human capital as "the skills, knowledge, and qualifications of a person, group, or workforce that are considered economic assets." Although AI has no skills, knowledge, or qualifications of its own, concerns that an overvalued AI will undervalue labor were most evident along the WGA and SAG-AFTRA picket lines of 2023.
The head of the British renewable energy group Octopus Energy told the BBC in July 2023 that customers prefer emails written by AI over those written by his staff, a practice that deeply worries trade unions in the UK. He had previously written in early May for The Times of London that the company had been "experimenting with AI for months."
The technology has its detractors. Noam Chomsky once likened ChatGPT to "auto-complete on steroids," and some would argue the term "generative AI" is misleading because its output is merely derivative of existing work. In that sense, AI needs people to stay healthy: without people creating original and diverse work to train on, we risk model collapse.
Then there's the risk to everyone's spending power if services and labor are replaced by AI. Just imagine: if artificial intelligence leaves large sectors of the economy unemployed or underemployed, who can afford the benefits of an AI future?
The State of AI Policy and Governance
In March 2023, tech venture capitalist Elon Musk signed a letter detailing the hypothetical risks of super-smart AI and calling for a pause on AI development, citing "profound risks to society." However, during his publicly requested pause, Musk quietly incorporated an AI company, xAI, and recruited top minds from OpenAI, Google Research, Microsoft Research, and DeepMind.
In May 2023, OpenAI's CEO, Sam Altman, testified before the US Senate Judiciary Committee that government intervention "will be critical to mitigate the risks of increasingly powerful" AI systems. As of September 2023, several leading AI companies in the US have agreed to a set of voluntary safeguards around the technology after a closed-door meeting with President Joe Biden and Vice President Kamala Harris.
European Union lawmakers approved a draft of the AI Act in June 2023 to mitigate the highest-risk applications of AI technology in the 27-nation bloc. Further, the United Nations is researching its options for global policy, looking toward a model based on how it formed the International Atomic Energy Agency or the Intergovernmental Panel on Climate Change.
AI industry leaders have publicly called for regulation. Still, there has been little movement in the US beyond a few congressional hearings and Executive Branch speeches. Concerns persist over labor, privacy, impersonation, and cyber-hacking. At the recent DEF CON conference, attendees were challenged to break into AI chat systems as quickly as possible, recovering credit card numbers and extracting instructions for causing violence.
In September 2023, the most prominent technology executives in the US reaffirmed their commitment to regulation at a closed-door forum at the US Senate. Still, there is little consensus on what that regulation should look like.
The Future of AI
Experts predict that AI capabilities will continue to advance rapidly, driving breakthroughs in a wide range of areas.
Some high-growth areas include:
- Increased human-like reasoning and problem-solving.
- Personalized education and healthcare.
- Further automation of routine physical and cognitive tasks.
- Integration of AI in more everyday objects through the Internet of Things.
However, with significant advancement in AI comes greater risk:
- Job disruption and effects on employment.
- Lack of transparency in AI decision-making.
- Data privacy and cybersecurity vulnerabilities.
- Exacerbation of existing biases and inequality.
Experts believe these risks will require proactive governance to craft policies that maximize the benefits of AI while minimizing downsides.
The Takeaway
This introduction covered the essential aspects of artificial intelligence. While the technology has enormous transformative potential for individuals and organizations, it is still in its early stages of development. Artificial intelligence is developing rapidly and creating opportunities in many areas, including business, science, and everyday life. While the long-term outcomes are uncertain, understanding the basics of AI will allow you to better grasp its future implications.
Explore ways to use these tools and imagine new ways to perform in your role. I recommend looking at Learning Tree's ChatGPT for Business Users: On-Demand if you're just starting to explore, or the Prompt Engineering for Business Users course if you've already been using AI as a business solution. In doing so, you can learn how to make AI work for you, not against you.