Allen Antoine


The present “age of technology” is interchangeable with the “Information Age,” generally dated from 1950. Since the beginning of any historical era is nebulous at best, it arguably started three years earlier, in 1947, with the invention of the transistor by William Shockley, John Bardeen, and Walter Brattain of Bell Labs. (Shockley later became notorious for promoting racist views about genetics.) At any rate, the transistor begat personal computing, the internet, the World Wide Web, and so on. With each generational advancement came outcries of impending doom and the end of civilization, at least the one we’re used to.

The fear of new or strange gadgetry or scientific advances is nothing new. Officially known as “technophobia,” it is regarded as a specific phobia traced back to that period of mass invention and manufacturing advancement called the Industrial Revolution. Its subdivisions include cyberphobia, the fear specifically of computers; mechanophobia, the fear of machinery; and digital anxiety, an irrational fear of using these devices or relying on them. More common, of course, is overreliance on them, specifically cell phones, but that is a broad subject worthy of a separate discussion.

Black people specifically have a special fear of technology since, historically, these advances have not been conceived for their benefit; indeed, they have often been weaponized against people of color. This too has spawned a name, “techno-racism,” the idea that systemic racism has been encoded into software to perpetuate the oppression forced upon them for hundreds of years.

The Next Big Thing
“Historically, when new technologies emerge, communities that are not included in their development often experience the consequences without having a voice in shaping them.”
—Computer Scientist Allen Antoine

The latest advancement, artificial intelligence (AI), has no specific starting point either, but for argument’s sake we’ll start with 1956, when the term was coined at Dartmouth College. During a brainstorming workshop there, the idea of “thinking machines” was raised, and research and development topics emerged, including a dedicated programming language in 1957. In short order, 1966 saw the appearance of the first “chatterbot” (since shortened to “chatbot”), followed by an autonomous vehicle in 1979, a driverless car in 1986, Deep Blue, the chess computer that defeated the reigning world champion (1997), and speech recognition software by the end of the millennium.

Fast-forward two and a half decades, and these innovations have seeped into everyday life, often without our knowledge. Along with this come concerns about ethical issues, among them the possibility of autonomously operated weapons of war introduced into the maelstrom of global conflict without the direct instruction of human operators.

This is the next step up from the deployment of pilotless drones.

AI safety researcher Dr. Heidy Khlaaf studies the implications of artificial intelligence for society as a whole and believes that confusion surrounding AI is the result of advertising and public relations firms eager to make outlandish claims without sufficient research to back up their statements. New applications such as this come with an inherent amount of uncertainty.

“I’m trying to use my position … to essentially bring these risks forward in a public way for people to understand,” she said.

To ease some of that anxiety, Allen Antoine is a good guide. With a foot in both the education and technology sectors, he shifts the conversation away from global politics and into the confines of home.

“Artificial intelligence is the ability of computer systems to perform tasks that typically require human intelligence—things like recognizing patterns, understanding language, making predictions, and generating new content,” he says, speaking from a vantage point that ranges from unmasking the secrets of the digital realm for elementary school children to the bastion of the University of Texas at Austin.

There he holds the lengthy title of Director of Computer Science Education Strategy for EPIC (Expanding Pathways in Computing).

“Most people are already interacting with AI whether they realize it or not,” he continues in an effort to bring the conversation down a few notches.

“When your phone suggests the next word in a text message, when Netflix recommends a movie, when a bank detects fraud, or when a navigation app finds the fastest route—that’s AI at work.”
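The next-word suggestion Antoine mentions can be sketched with a toy bigram model: count which word most often follows each word in past text, then suggest the most frequent follower. This is an illustrative simplification, not any phone keyboard's actual algorithm, and the tiny "corpus" here is invented.

```python
from collections import Counter, defaultdict

# Invented mini-corpus; real keyboards learn from vastly more text.
corpus = [
    "see you later",
    "see you soon",
    "see you later today",
    "talk to you later",
]

# Count which word follows each word (a bigram model).
following = defaultdict(Counter)
for sentence in corpus:
    words = sentence.split()
    for prev, nxt in zip(words, words[1:]):
        following[prev][nxt] += 1

def suggest(word):
    """Return the word most often seen after `word`, or None if unseen."""
    counts = following.get(word)
    return counts.most_common(1)[0][0] if counts else None

print(suggest("you"))  # "later" follows "you" three times, "soon" once -> "later"
```

Production systems replace the raw counts with large neural language models, but the underlying task — predict the likely next token from patterns in data — is the same.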

Ethics and Inequality
“With any new technology, we always need to be thinking about what’s coming next. But AI is moving so fast that it’s difficult to grasp how significantly it’s going to change things.”
— David Luxton, PhD, clinical psychologist at the University of Washington Medical School

Concerns about the application of AI center on criminal justice; healthcare and the medical field; and human resources, especially equitable employment opportunities for groups traditionally discriminated against.

Software Engineer Deandre Cole is a proud son of South Central who spent his formative years writing software for mammoth corporations, honing his skills to the point where he can work from the confines of home.

Here he toils at writing code for his own personal enjoyment and career advancement in a house wired for the ages. AI is ever present, as his kids use Amazon for homework and recreation.

“AI is basically computer technology that tries to do things that normally need a human brain, like answering questions, recognizing images, writing, planning, or spotting patterns,” he explains.
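The “spotting patterns” Cole describes is the core of tasks like fraud detection: flag activity that deviates sharply from an account's usual behavior. A minimal sketch, using a simple statistical outlier test; the amounts and threshold are invented for illustration and real systems use far richer models.

```python
import statistics

# Invented history of a customer's typical purchases.
usual = [12.50, 8.99, 15.00, 9.75, 11.20, 14.30]

mean = statistics.mean(usual)
stdev = statistics.stdev(usual)

def looks_suspicious(amount, z_cutoff=3.0):
    """Flag amounts more than z_cutoff standard deviations from the mean."""
    return abs(amount - mean) / stdev > z_cutoff

print(looks_suspicious(13.00))   # close to the usual pattern -> False
print(looks_suspicious(950.00))  # far outside the pattern -> True
```

The pattern learned here is just a mean and a spread, but the principle — model "normal," then flag departures from it — underlies real fraud-detection pipelines.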

“For the average person, AI is already showing up in everyday life, even if they do not always notice it,” he says, acknowledging the role it has assumed in his own life.

When pressed further, he concedes the apprehension associated with this latest development.

“…it is not just a new gadget.”

“It messes with power, jobs, truth, and control. People argue about AI because it can help a lot, but it can also cause real harm. The same tool that helps someone write a resume can also be used to scam people, spread fake news, or flood the internet with junk. It threatens people’s sense of security.”

Antoine agrees.

“Whenever technology changes how work is done, people understandably worry about what that means for employment. Second, there are ethical concerns, particularly around bias. AI systems learn from data, and if that data reflects historical inequalities, those biases can show up in the technology. We’ve seen examples in areas like facial recognition, hiring systems, and lending algorithms.”
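Antoine's point about bias can be made concrete with a toy example: a model that simply learns to mimic historical decisions will reproduce any disparity baked into them. The hiring records below are invented, and the "model" is a deliberately naive majority-vote rule, not any real hiring system.

```python
# Invented historical hiring records: (group, hired?).
# Group A was historically approved far more often than group B.
history = ([("A", True)] * 9 + [("A", False)] * 1 +
           [("B", True)] * 3 + [("B", False)] * 7)

def historical_rate(group):
    """Fraction of applicants from `group` hired in the past."""
    outcomes = [hired for g, hired in history if g == group]
    return sum(outcomes) / len(outcomes)

def naive_predict(group):
    """A naive model: predict the majority historical outcome per group."""
    return historical_rate(group) >= 0.5

print(historical_rate("A"), naive_predict("A"))  # 0.9 True
print(historical_rate("B"), naive_predict("B"))  # 0.3 False
```

The naive model approves everyone from group A and rejects everyone from group B regardless of individual qualifications — the historical skew has become the rule. Real systems are subtler, but this is the mechanism behind the biased facial recognition, hiring, and lending examples Antoine cites.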

As an involved observer, Cole has concerns about the safeguards developed as an adjunct to this developing technology.

“Rules and regulations regarding its usage need to move faster in protecting privacy, data, and the way it can legally be used to prevent predatory behaviors.”

Both men dismiss the idea that AI presents a special threat to Black people.

“Its implications are huge, just as they are for people of other races,” Cole believes.
Antoine stresses active participation in growing with the industry, in light of his history mentoring children in Houston, TX.

“The implication for the African American community is clear: we need to be builders, not just users of AI. That means increasing representation in computer science, data science, and AI-related fields. It means ensuring that Black students have access to quality computing education. And it means having diverse voices involved in the conversations about ethics, policy, and design.”
