
What is AI?

The Human's Handbook to Computers that Think

August 25, 2020 | by: Ryan Duffy

[Illustration: robotic arm and Earth]
“Computers are useless. They can only give you answers.”
— Pablo Picasso, 1968
"Artificial intelligence is the future, not only for Russia, but for all humankind....Whoever becomes the leader in this sphere will become the ruler of the world."
— Vladimir Putin, 2017

Computers got a lot smarter between 1968 and 2017. Artificial intelligence is not new, but it’s increasingly influential. We’ll return to definitions later, but for now, think of AI as the capacity of a machine to simulate human intelligence.

AI is already ubiquitous in your day-to-day life, ranking blue links on Google searches, blocking spam from your work inbox, providing your boss with marketing and sales leads, suggesting Amazon products and Netflix shows, sorting Facebook and TikTok feeds, and navigating you from 📍A to 📍B. That’s just the tip of the iceberg.

Now that we have your attention, we’ll turn down the galaxy-brain knob a bit. This guide provides an overview of what you need to know about AI today. No more, no less.

Despite how far it’s come, AI is far from general intelligence or its anthropomorphized pop culture depictions.

I. How to Conduct Your Own AI Sniff Tests

Not everyone agrees on what’s considered AI. The goalposts are constantly shifting. We have five concepts that will help you be discerning in the real world.

Catch ‘em all: The field of “AI” is a catch-all computer science category, composed of tools and techniques that vary in sophistication. The field has grown and changed over the decades. The quest to engineer ever-smarter machines encompasses philosophy, biology, logic, neuroscience, and evolution. AI is a sticky term that ends up applied to bits of all of these disciplines, rightly or wrongly.

The AI effect: Also known as the “odd paradox,” this essentially means that a software technique loses its AI label once it becomes mainstream. By this logic, AI is whatever machines can’t do yet; as soon as a machine can do it, it’s not AI anymore.

And to clarify a few misconceptions:

AI isn’t inherently unbiased: In the U.S., the AI community skews white and male. This affects how AI systems are built and designed, as well as what training data they’re fed. Data can often be fundamentally biased itself. When bias creeps into algorithms, it can reinforce and even accelerate existing inequalities—especially in regard to race and gender. Ethical AI is a rapidly growing subdiscipline, which we’ll explore later.

For now, we’ll leave you with a story: In 2016, research scientist Timnit Gebru attended NeurIPS, a prestigious machine learning and computational neuroscience conference. She counted five Black attendees in the crowd of ~5,500 researchers. She says Black attendees’ representation at NeurIPS has increased but that it’s still relatively low.

AI ≠ full automation: Autonomy is a machine’s ability to do a task on its own. But it’s not a binary—it’s a spectrum. A system becomes more autonomous as it tackles more complex tasks in less structured environments.

  • Automatic systems can handle simple tasks, typically framed in terms of Yes/No.
  • Automated systems can handle more complex tasks, but in relatively structured environments.
  • Autonomous systems can perform tasks in unstructured, complex environments without constant input or guidance from a user.


A case study from cars: Automatic systems (transmission, airbags) do their thing after a certain trigger. Automated systems (Tesla’s Autopilot or GM’s Super Cruise) handle specific driving functions and must have human oversight. A fully autonomous vehicle can sense, decide, and act without human intervention. Just enter the destination.

Snake oil: One programmer’s AI may be another’s linear regression. Some startups, marketers, and sales departments are keen to exploit the fluidity of AI as a concept, dressing products up as “AI-enabled” even when the label doesn’t fit.

Companies have exaggerated the degree of automation even when their software still has mostly or only humans in the loop. And a 2019 survey found that 40% of European “AI startups” didn’t actually use the technology.

AI policy analyst and researcher Mutale Nkonde told us, “The truth is that much of what we’re buying is snake oil. We’re prepared to buy it because it taps into this fantastical piece of our brain, but we need to be very very suspicious of something that we cannot audit. And until those audit processes are in place, we shouldn’t assume that it does what it says it can do.”


II. Machines Go to School: A Brief History

AI hype and rosy exuberance are nothing new. In periods of retrenchment—famously known as “AI winters”—government funding and private investment in basic research dried up. Algorithmic innovation and performance plateaued. Media and the general public lost interest.

And while winter comes for AI, so does spring: AI innovation skyrocketed in the 2010s. This timeline captures just a fraction of recent AI developments.


III. Computers that see, hear, sense, and speak

AI is a grab bag of many techniques and terms. Here we’ll provide clarity about what matters for the business world, starting with the key definitions.

Algorithm

A set of instructions that tells a computer what to do. Every algorithm takes an input and produces an output.
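
If you like seeing things in code, here’s the idea in a toy Python example of our own (the scenario is made up):

```python
def average_order_value(order_totals):
    """A tiny algorithm: the input is a list of order totals, the output is their average."""
    if not order_totals:  # guard against an empty input
        return 0.0
    return sum(order_totals) / len(order_totals)

print(average_order_value([25.0, 40.0, 10.0]))  # input in, output out -> 25.0
```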

Artificial intelligence

The science and engineering of creating intelligent machines that can achieve human-like goals, per AI grandfather John McCarthy.

In the early days, researchers focused on symbolic systems, or GOFAI, good old-fashioned AI. Programmers encode knowledge and logic in human-readable syntax. Expert systems, a spin-off of symbolic systems, attempt to emulate a human expert’s decision-making process using rules and if-then statements.
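
Here’s a rough sketch of what those if-then rules can look like in practice. It’s a toy example of our own (the loan scenario and thresholds are invented), not how any real underwriting system works:

```python
def loan_decision(credit_score, debt_to_income):
    """A toy expert system: hand-written rules stand in for a human underwriter's judgment."""
    if credit_score >= 740 and debt_to_income < 0.35:
        return "approve"
    if credit_score >= 660 and debt_to_income < 0.45:
        return "refer to a human underwriter"
    return "decline"

print(loan_decision(credit_score=720, debt_to_income=0.30))  # -> "refer to a human underwriter"
```

Every rule has to be written and maintained by hand, which is exactly why this approach hits a ceiling.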

We can understand the decision-making process of symbolic systems. But because they must be hand-coded, GOFAI systems cannot make more complex decisions—or handle many current AI applications. So let’s move on to the techniques en vogue.

Machine learning

This is the key subset of AI, and frankly, why we’ve gathered you here today. ML enables systems to find patterns, make predictions, and draw conclusions without explicit programming. An ML system can do its thing without its minders needing to hand-code every rule. With any luck, ML researchers train their algorithms, deploy them, and let the system “learn” from datasets. ML has many subsets:

  • Supervised learning: You feed the algorithm loads of labeled training data, e.g., an image of a dog with the annotation “dog.” Supervised learning systems learn to associate inputs with the correct output (see the sketch after this list).
  • Unsupervised learning: You give the algorithm unstructured, unlabeled data and it makes inferences on its own. A common use case is clustering, where input data is grouped by similarity. In plainer terms, this could help you find overlap between friends on a social graph.
  • Semi-supervised learning: Semi-supervised learning systems can handle datasets that are noisy or missing many labels.
  • Reinforcement learning: A program learns by trial and error, guided by feedback, in dynamic environments. RL agents are commonly found acing multiplayer games or guiding robots.
  • Transfer learning: A model trained for one task is reused as a starting point for a separate task.
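
To make the first two flavors concrete, here’s a minimal sketch using the scikit-learn library (the tiny dataset is made up; the point is the difference between learning from labels and finding groups without them):

```python
# Requires: pip install scikit-learn
from sklearn.tree import DecisionTreeClassifier
from sklearn.cluster import KMeans

# Supervised: each example (hours of exercise, hours of sleep) comes with a label.
X_labeled = [[0, 5], [1, 6], [5, 8], [6, 7]]
y = ["tired", "tired", "rested", "rested"]
clf = DecisionTreeClassifier().fit(X_labeled, y)
print(clf.predict([[4, 8]]))  # the model learned the input-output mapping -> likely "rested"

# Unsupervised: the same kind of data, but no labels; the algorithm finds groups on its own.
X_unlabeled = [[0, 5], [1, 6], [5, 8], [6, 7]]
clusters = KMeans(n_clusters=2, n_init=10).fit_predict(X_unlabeled)
print(clusters)  # e.g., [0 0 1 1] -- two clusters, but it's up to us to name them
```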

Neural nets

Short for neural networks, these are multi-layered mathematical constructions inspired by our brains’ adaptable neurons. Neural nets have layers of nodes. As the data moves through different layers, the program extracts different information and finds patterns.

Neural nets are the architecture that has turbocharged machine learning applications over the past decade.
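
For a feel of what “layers of nodes” means, here’s a bare-bones forward pass written with NumPy (our own toy; the weights are random just to show the mechanics, whereas a real network learns them from data):

```python
import numpy as np

rng = np.random.default_rng(0)

# A tiny network: 4 inputs -> 5 hidden nodes -> 2 outputs.
W1, b1 = rng.normal(size=(4, 5)), np.zeros(5)
W2, b2 = rng.normal(size=(5, 2)), np.zeros(2)

def forward(x):
    hidden = np.maximum(0, x @ W1 + b1)  # hidden layer with ReLU: each node extracts a feature
    return hidden @ W2 + b2              # output layer combines those features into a prediction

print(forward(np.array([0.2, -1.0, 0.5, 3.0])))  # two output numbers for one four-number input
```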

Natural language processing

NLP is fundamentally about machines digesting plainspoken human language. Neural nets are a godsend for NLP, boosting machines’ ability to transcribe, translate, generate speech, and summarize. NLP subsets include natural language understanding (NLU) and generation (NLG).

Computer vision

Computer vision teaches computers to see or, in more technical terms, analyze and process the visual world. At a granular level, computer vision is parsing pixels and inferring what object is represented by a cluster of colors.
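
At its most granular, that looks something like this wildly simplified sketch of ours: a grayscale “image” is just a grid of numbers, and even a crude brightness rule can pick out a bright blob. (Real computer vision relies on neural nets, not hand-written thresholds.)

```python
import numpy as np

# A 5x5 grayscale "image": 0 is black, 255 is white. The bright square is our "object."
image = np.array([
    [ 10,  12,  11,  10,  12],
    [ 11, 240, 250, 245,  10],
    [ 12, 248, 252, 243,  11],
    [ 10, 241, 249, 247,  12],
    [ 11,  10,  12,  11,  10],
])

# The crudest possible "vision": call any pixel brighter than 128 part of the object.
mask = image > 128
print(mask.sum(), "bright pixels out of", image.size)  # -> 9 bright pixels out of 25
```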

Generative adversarial networks

First developed in 2014, GANs pit two neural networks against each other. One network (the generator) serves up content, typically images or video, which the second (the discriminator) evaluates as real or fake. Through that competition, the pair learns to generate synthetic audiovisual outputs that look realistic: artwork, video, and of course—deepfakes.
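
For the curious, here’s a heavily simplified adversarial loop of our own devising: the “generator” is a two-parameter linear map, the “discriminator” is logistic regression, and the “content” is one-dimensional numbers rather than images. It shows the back-and-forth, nothing more:

```python
import numpy as np

rng = np.random.default_rng(1)
sigmoid = lambda s: 1 / (1 + np.exp(-np.clip(s, -30, 30)))

real = lambda n: rng.normal(4.0, 1.0, n)  # "real" data: numbers clustered around 4
a, b = 1.0, 0.0                           # generator: fake = a * noise + b (starts far from the real data)
w, c = 0.1, 0.0                           # discriminator: D(x) = sigmoid(w * x + c), a "probability real"
lr, batch = 0.05, 64

for step in range(2000):
    z = rng.normal(0.0, 1.0, batch)
    x_fake, x_real = a * z + b, real(batch)
    d_real, d_fake = sigmoid(w * x_real + c), sigmoid(w * x_fake + c)

    # Discriminator step: get better at scoring real as real and fake as fake.
    w += lr * np.mean((1 - d_real) * x_real - d_fake * x_fake)
    c += lr * np.mean((1 - d_real) - d_fake)

    # Generator step: adjust a and b so the discriminator scores fakes as real.
    d_fake = sigmoid(w * x_fake + c)
    grad_x = (1 - d_fake) * w
    a += lr * np.mean(grad_x * z)
    b += lr * np.mean(grad_x)

fakes = a * rng.normal(size=1000) + b
print(f"fake samples now average {fakes.mean():.2f}")  # with luck, close to the real data's 4.0
```

Swap the one-dimensional numbers for images and the toy linear maps for deep networks, and you have the basic recipe behind deepfakes.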

Deep learning

This is a subfield of ML built on many-layered (“deep”) neural nets, and it pairs well with unsupervised learning techniques. Deep learning is all about scale, excelling with vast quantities of high-quality data and compute power.

Deep learning algorithms have beaten humans in Go, the abstract strategy board game, and power the world’s top image recognition, voice recognition, and translation applications. The technique burst onto the scene in the last 10–20 years, thanks to a confluence of tailwinds.

Deep learning part 2: Algorithmic innovation

It's difficult to objectively track innovation in algorithmic design, but we can use proxies. From 1998 to 2019, the volume of peer-reviewed AI publications grew over 300 percent. In 2019, ~13,500 experts attended NeurIPS, a top AI conference—8x the 2012 crowd.

Deep learning part 3: Data

Thanks to social media, a growing constellation of IoT sensors, ubiquitous mobile phones, and much more, the global datasphere is growing fast. The International Data Corporation (IDC) predicts it will grow to 175 zettabytes (ZB) in 2025, from 33 ZB in 2018. (Reminder: more data ≠ useful, structured, and/or unbiased data.)
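
Quick back-of-the-envelope math on what that forecast implies, using only IDC’s two figures:

```python
# What annual growth rate turns 33 ZB (2018) into 175 ZB (2025)?
start_zb, end_zb, years = 33, 175, 2025 - 2018
cagr = (end_zb / start_zb) ** (1 / years) - 1
print(f"Implied compound annual growth: {cagr:.1%}")  # -> roughly 27% per year
```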

Deep learning requires a large volume of labeled data and specialized processors. Algorithms must be refined and updated after deployment to maintain performance.

Deep learning part 4: Specialized hardware

Initially designed for graphics-intensive applications such as video production and computer-aided design, graphics processing units (GPUs) were in the right place at the right time. They became the de facto workhorses for deep learning applications due to their proficiency with parallel computing, which neural nets need.

Nvidia has captured much of the value from this trend, leveraging its GPUs for AI, data centers, and autonomous vehicles.

Deep learning part 5: Computing power

Fifty-five years ago, Fairchild Semiconductor R&D chief and future Intel cofounder Gordon Moore published an observation that became an eponymous law: the number of transistors on integrated circuits doubles every two years or so, and computing costs drop exponentially.
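
The arithmetic behind the law is just repeated doubling. A quick illustration of the rule of thumb, starting from Intel’s 1971-era 4004 chip and its roughly 2,300 transistors:

```python
# Moore's law as arithmetic: transistor counts double roughly every two years.
transistors = 2_300  # the Intel 4004 (1971) had about 2,300 transistors
for year in range(1971, 2021, 2):
    transistors *= 2
print(f"~{transistors:,} transistors projected by 2021")
# Naive doubling lands in the tens of billions -- the right order of magnitude for today's biggest chips.
```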

Bulkier algorithms require more raw power. “The biggest lesson that can be read from 70 years of AI research is that general methods that leverage computation are ultimately the most effective, and by a large margin,” computer scientist Rich Sutton wrote in 2019. Some researchers have said that more computationally efficient solutions are needed for deep learning.

[Chart: 2019 AI Index]

Before we move on, a word on artificial general intelligence, which does not exist. AGI is a theoretical AI system that could perform every human intellectual task at parity with us or (more likely) at a superhuman level. By Emerging Tech Brew’s estimation, we’re still decades away from that tipping point—known as the singularity—assuming it happens at all.

Today’s AI systems are “narrow” or “weak,” meaning they can handle specific problems. That doesn’t mean AI systems can’t cognitively compete with us and/or achieve superhuman performance levels in particular tasks. AI has bested humans in checkers, Jeopardy!, chess, Go, and complex role-playing video games.


IV. Putting AI to work

AI is frequently described as a general purpose technology. GPTs, such as electricity and the internet, reshape entire societies, economies, and industries.

While we do believe AI has applications across virtually every industry, we don’t want to keep you here forever. We’ve handpicked 14 industries that AI could reshape. Our methodology = largest total addressable market. Simple as that.

  • Manufacturing
  • Logistics
  • Transportation
  • Healthcare
  • Social Media
  • E-commerce
  • Advertising
  • Cybersecurity
  • Financial Services
  • Insurance
  • Surveillance
  • Government
  • Smart Home
  • Data Labeling
Manufacturing

Decades ago, industrial manufacturers were the earliest adopters of industrial robots, which tended to be simple, automated systems performing repeatable tasks. Today’s robots are more intelligent, equipped with computer vision, cloud connectivity, sensor suites, and bespoke hardware. A new category of intelligent robots (collaborative bots, or cobots) is emerging that can more safely work alongside humans in unstructured environments.

Manufacturing also has non-obvious AI applications. Manufacturers use ML systems for predictive maintenance and quality assurance. Digitized factory operators can use AI to predict possible disruptions and optimize operational efficiency, which includes controversial tactics like surveilling and algorithmically managing floor workers.

Big picture: As global demand for goods climbs, so will the robotization of factories. More manufacturing automation could improve quality while decreasing costs. It could also displace up to 20 million jobs globally by 2030, which will disproportionately impact poorer and more rural countries.


V. The key players

At a geopolitical level, competition has been a primary driver of government AI investment and strategy. The world’s top two economies are also its AI superpowers.

It’s difficult to quantify AI sophistication, but talent is a good proxy. The U.S. has 59% of the world’s top-tier AI researchers, while China has 11%, according to MacroPolo.

China has invested billions to reach technological parity with the U.S. In 2017, China released a national AI strategy, the New Generation AI Development Plan, which aims to make the country the global AI leader by 2030; it complements the broader “Made in China 2025” industrial strategy blueprint.

  • “AI will be the first GPT in the modern era in which China stands shoulder to shoulder with the West in both advancing and applying the technology. During the eras of industrialization, electrification, and computerization, China lagged so far behind that its people could contribute little, if anything, to the field,” AI expert Kai-Fu Lee predicts in his book AI Superpowers.

China is competitive with the U.S. in AI commercialization in 2020. But it’s not driving as many fundamental research breakthroughs—and likely needs until 2025 before it could reach a tipping point. The countries have distinct competitive advantages:

  • U.S.: Highest-quality algorithms, best talent pool, superior R&D. Also, the key AI hardware sectors—like semiconductors—are dominated by U.S. companies.
  • China: Fewer privacy constraints, largest internet market by users, huge state subsidies, and a more coherent overarching strategy.

The EU’s domestic tech sector is not as robust as its American and Chinese counterparts. But the continent is a top producer of the best AI researchers, a key market for tech companies, and a powerful regulatory bloc creating its own rules for data and AI governance.

At a macroeconomic level, AI can boost productivity and wealth. It can also fuel job destruction and inequality. The U.S. Rust Belt shows the pain of the country losing 5 million manufacturing jobs since 2000 to the twin forces of automation and globalization. Midwest states also have the U.S.’ highest rates of robot density.

What’s new with today’s intelligent automation? The scope of physical and cognitive work that can be automated. Some jobs could be engineered into obsolescence, although most will be reskilled rather than outright deskilled. In 2017, McKinsey predicted 15% of the global workforce’s “current activities” will be automated by 2030.

Lindsey Cameron, a professor at the University of Pennsylvania’s Wharton School, told Emerging Tech Brew that when cars arrived on the scene, “people wondered what was going to happen to blacksmiths, and what happened was displacement but then eventually the creation of new jobs. And I think in the long run, that’s what I see with AI.”

“There will be a lot of new jobs created, but in the short term there will also be a lot of pain because upskilling can’t happen in time.”

At a social level, we’re all key players. We’re end users of AI, but we’re also subject to the whims of imperfect algorithms. AI is used to identify potential terrorists, make hiring decisions, set bail, predict criminal recidivism, and recommend medical treatments. When algorithms misfire in high-stakes situations, the consequences are disproportionately shouldered by communities of color and women.

AI ethics experts have ideas for mitigating risk: Companies should build diverse technical teams and open up their algorithms for audits and independent oversight. Technologists should use data reflective of an entire population when they train and deploy algorithms.

  • The EU is moving to more closely scrutinize and regulate AI in high-risk applications like healthcare, policing, and transportation.
  • In the U.S., the Pentagon adopted five ethical principles to steer its AI usage—it needs to be responsible, equitable, traceable, reliable, and governable.
  • The EU’s GDPR established a legal framework for data processing and obtaining user consent. Some U.S. politicians believe AI vendors should obtain consent from users before processing sensitive personal or biometric data.

Nkonde, who worked on the Algorithmic Accountability Act, told us regulation is necessary to rein in harmful AI, both around what can be released into the marketplace and around algorithmic transparency.

“Transparency in terms of consumer products is very important,” she said. “When we’re dealing with algorithms I think it’s unfair to make people into computer scientists in order to understand what they’re buying. When we’re thinking through AI systems it needs to be something that’s really explainable—three points or less—that speaks to the social impact.”


VI. One hundred years of AI

When world leaders invoke AI, they often describe its impact at a civilizational level. Executives from Silicon Valley to Shenzhen are equally animated when discussing the technology. That’s not a coincidence—the world’s leading technology firms are all AI powerhouses.

Investors are quick to fund new entrepreneurs in the space. In the second quarter of 2020, U.S. AI startups received $4.2 billion in funding, per CB Insights. Chinese companies received nearly $1.4 billion.

All this activity is a giant leap from the 1950s, when “artificial intelligence” was aspirationally coined on the leafy campus of Dartmouth. Today’s deep learning and neural nets required many decades of if-then statements, iterations, and new techniques. And yes, today’s AI systems are narrow, flawed, and at times harmful. But they’re layered across more devices, services, and businesses than ever before.


Final Thoughts

The people (or robots) writing the history books in 2050 probably won’t link a superpower’s rise and fall to its AI strategy. But they’ll definitely dissect AI’s technological disruption of jobs, economies, and societies. That narrative will have some good and some bad, but it’s truly impossible to predict.
