
Not Much ‘Intelligence’ in Biden’s AI Executive Order, Experts Say

It may be the only presidential executive order ever inspired by Tom Cruise.

Earlier this year at Camp David, President Joe Biden settled down to watch the latest “Mission: Impossible” installment. In the film, Cruise and his IMF team face off against a rogue, sentient artificial intelligence (AI) that threatens all of humanity. The movie reportedly left the president shaken.

“If he hadn’t already been concerned about what could go wrong with AI before that movie, he saw plenty more to worry about,” said White House Deputy Chief of Staff Bruce Reed.

Reed said Biden is “impressed and alarmed” by what he has seen from AI. “He saw fake AI images of himself, of his dog… he’s seen and heard the incredible and terrifying technology of voice cloning, which can take three seconds of your voice and turn it into an entire fake conversation.”

Given that the cutting-edge communications technology when Biden was born was black-and-white television, it’s no surprise he is awed by the power of AI. His order adds new layers of government regulation to this quickly developing technology.

For example, the order will “require that developers of the most powerful AI systems share their safety test results and other critical information with the U.S. government.” It also instructs federal agencies to establish guidelines “for developing and deploying safe, secure, and trustworthy AI systems.” And the companies doing the cutting-edge work that can keep the U.S. competitive with China are required to notify the federal government when they are training their models and to share the results of “red-team safety tests.”

According to Reason magazine science reporter Ronald Bailey, “Red-teaming is the practice of creating adversarial squads of hackers to attack AI systems with the goal of uncovering weaknesses, biases, and security flaws. As it happens, the leading AI tech companies — OpenAI, Google, Meta — have been red-teaming their models all along.”

Tech experts say that movies from the multiplex aren’t necessarily the best sources for setting public policy.

“I just don’t understand why that’s where people’s heads go,” said Shane Tews, a nonresident senior fellow specializing in cybersecurity at the American Enterprise Institute. “And I do think it really is because they don’t have a lot of examples to replace that ‘Terminator’ feeling.

“I think as we start to understand how to have a better human-machine relationship, that it’ll work better,” Tews added. “I’m not sure that people will necessarily get it, but people are kind of in a crazy moment in their heads right now.”

The White House has embraced the crazy, critics say, spreading fear about existential threats.

“When people around the world cannot discern fact from fiction because of a flood of AI-enabled mis- and disinformation, I ask, is that not existential for democracy?” said Vice President Kamala Harris during a speech at the United Kingdom’s AI Safety Summit. In her mind, a variety of issues need to be addressed to manage the growing AI industry. “We must consider and address the full spectrum of AI risk — threats to humanity as a whole, as well as threats to individuals, communities, to our institutions, and to our most vulnerable populations.”

Biden’s proposal was praised by the Electronic Frontier Foundation. The digital rights nonprofit supports regulating the use of AI in certain situations, like housing, but not the technology itself.

“AI has extraordinary potential to change lives, for good and ill,” Karen Gullo, an EFF analyst, said. “We’re glad to see the White House calling out algorithmic discrimination in housing, criminal justice, policing, the workplace, and other areas where bias in AI promotes inequity in outcomes. We’re also glad to see that the executive order calls for strengthening privacy-preserving technologies and cryptographic tools. The order is full of ‘guidance’ and ‘best practices,’ so only time will tell how it’s implemented.”

Other technology policy analysts, like Adam Thierer, panned the executive order.

“Biden’s EO is a blueprint for back-door computing regulation that could stifle America’s technology base,” said Thierer, a resident senior fellow in technology and innovation at the R Street Institute. “It will accelerate bureaucratic micro-management of crucial technologies that we need to be promoting, not punishing, as we begin a global race with China and other nations for supremacy in next-generation information and computational capabilities.”

He has argued that people shouldn’t be pessimistic about new technology or retreat into what he calls “technological stasis.”

And Doug Kelly, CEO of the American Edge Project, said Washington’s temptation to overregulate could cost the U.S. economy dearly.

“A report by PwC estimates the global economic effect of AI to be $14 trillion by 2030, with China and North America projected to be the biggest winners in terms of economic gain,” Kelly wrote for DCJournal. “But if regulators in Washington or Europe tie the hands of AI innovators, we run the risk of falling behind in AI leadership to China, which has invested hundreds of billions in an attempt to gain global AI superiority.”

AI is already used in autocorrect, autocomplete, chatbots, translation software, and programmable devices like robot vacuums and automatic coffee makers. Popular sci-fi characters like C-3PO, R2-D2, and JARVIS are also AI-driven.

Tews suggests people get frustrated when technology doesn’t work as it should. “I think people project that onto what happens when I get into these situations where it’s me and this machine that’s not acting appropriately – I’m not getting the net result of what I want, and I don’t want there to be more of that in my life.”

KING: Artificial Intelligence — the Greatest Disruptor Ever?

To rephrase Leon Trotsky: You may not be interested in artificial intelligence, but artificial intelligence is interested in you.

Suddenly, long rumored and long awaited, AI is upon the world — a world that isn’t ready for the massive and lasting disruption it threatens.

AI could be the greatest disruptor in history, surpassing the arrival of the printing press, the steam engine, and electricity. Those all led to good things.

At this time, the long-term effects of AI are just speculative, but they could be terrifying, throwing tens of millions out of work and making a mockery of truth, rendering pictures and printed words unreliable.

There is no common view on the impact of AI on employment. When I ask, the scientists working on it point to the false fears that once greeted automation. In reality, jobs swelled as new products needed new workers.

My feeling is that the job scenario has yet to be proven with AI. Automation added to work by making old work more efficient and creating things never before enjoyed, and, in the process, opening up new worlds of work.

AI, it seems to me, is all set to subtract from employment, but there is no guarantee it will create great, new avenues of work.

An odd development spurred by AI might be a revival of unionism. More people might want to join a union in the hope that it will offer job security.

The endangered people are those who do less-skilled tasks, like warehouse laborers or fast-food servers. Already Wendy’s, the fast-food chain, is working to replace order-takers in the drive-through lanes with AI-operated systems, mimicking human beings.

Also threatened are those who may find AI can do much, if not all, of their work as well as they do. They include lawyers, journalists, and musicians.

Here the AI impact could, in theory, augment or replace our culture with new creations: symphonies superior to those composed by Beethoven or country songs better than those by Kris Kristofferson.

I asked the AI-powered Bing search engine a question about Adam Smith, the 18th-century Scottish economist. Back came three perfect paragraphs upon which I couldn’t improve. I was tempted to cut-and-paste them into the article I was writing. It is disturbing to find out you are superfluous.

Even AI’s creators and those who understand the technology are alarmed. In my reporting, they range from John E. Savage, An Wang professor emeritus of computer science at Brown University, to Stuart J. Russell, professor of computer science at the University of California, Berkeley, and one of the preeminent researchers and authors on AI. They both told me that scientists don’t actually know how AI works once it is working. There is general agreement that it should be regulated.

Russell, whose most recent book is “Human Compatible: Artificial Intelligence and the Problem of Control,” was one of a group of prominent leaders who signed an open letter on March 29 urging a six-month pause in AI development until more is understood—leading, perhaps, to regulation.

And there’s one rub: How do you regulate AI? Having decided how to regulate AI, how would it be policed? By its nature, AI is amorphous and ubiquitous. Who would punish the violators and how?

The public became truly aware of AI as recently as March 14 with the launch of GPT-4, the successor to GPT-3, the technology behind the chatbot ChatGPT. Millions of people went online to test it, including me.

The chatbot answered most of the questions I asked it more or less accurately, but often with some glaring error. It did find out about a friend of my teenage years, but she was from an aristocratic English family, so there was a paper trail for it to unearth.

Berkeley’s Russell told me that he thinks AI will make 2023 a seminal year “like 1066 [the Norman Conquest of England].”

That is another way of saying we are balanced on the knife-edge of history.

Of course, you could end AI, but you would have to get rid of electricity — hardly an option.

Conshohocken-Based ZeroEyes Watches Out for Guns in Schools

No parent can completely dismiss the possibility of their child’s school being the scene of a shooting incident. The tragic school shooting in Uvalde, Texas, still reverberates.

Various theories have been espoused on how to curtail mass shootings. Sam Alaimo has put his theories into practice.

Alaimo is the founder and chief financial officer of ZeroEyes, a Conshohocken-based security firm that uses artificial intelligence (AI) to detect the presence of a gun, minimize the risk of a mass shooting, and keep children safe. The company is the creator of the only AI-based gun detection video analytics platform that holds the U.S. Department of Homeland Security SAFETY Act Designation.

Alaimo, a former Navy SEAL, helped launch the company in 2018 as an effort to protect children and their teachers.


“I met our CEO, Mike Lahiff, on a SEAL team over a decade ago,” Alaimo recalls. “He was at his daughter’s school in 2018 after the Parkland shooting and he noticed all the physical security cameras there that were not proactive. They just sat there and did nothing.”

Alaimo, Lahiff, and their colleagues spent two years developing a system that would detect guns more efficiently than existing technology while still being cost-effective. The process included a rigorous testing protocol.

“We brought in cameras,” Alaimo said. “We used different angles, different lighting conditions, different guns, different hand sizes, every different variable we could think of. That ended up being the unique difference between us and the other companies.”

That is one element of the ZeroEyes system. Then there is the human element: a team of observers who monitor and analyze the information provided by the system of cameras.

To protect clients’ privacy and prevent the disclosure of confidential information, the analysts do not watch live video streams.

“The analyst is looking at a blank screen,” Alaimo said. “The only time an image is seen on that screen is when a gun has been identified as either a true analysis or a false positive.”

The observers have backgrounds in the military or as first responders, and their professional knowledge is essential for the ZeroEyes system to operate at maximum efficiency.

“Our goal,” Alaimo says, “and what makes the software unique, is that when a gun is exposed in front of a security camera, within three to five seconds that alert will be sent to the first responders or the client. It goes from the camera to our ZeroEyes operating center in Conshohocken to the client.

“To be that quick, we need people who are very comfortable identifying guns and very calm under pressure.”

Alaimo and his team envisioned their system as something that would be used primarily in schools. But by the time it was officially unveiled in 2020, many schools had resorted to virtual education in response to the COVID-19 pandemic.

“We had a lot of initial interest,” Alaimo recalls. “Then COVID-19 happened.”

To keep the company afloat, the system was marketed to governmental and commercial entities as well as educational institutions.

Today, Alaimo says ZeroEyes technology has been introduced in 25 states, with clients from the education sector in 19 of them. And the company’s reach is expanding.

“By the end of the year we should be in well over 40 (states),” he says.

In the wake of school shootings of recent years, Alaimo says the company is receiving an increasing number of inquiries.

“The interest has grown exponentially,” he said. “It seems that the COVID-19 issue, the lockdowns, the mental health (issues) resulting from that, is causing an increase in shootings. I’m not sure what the cause is, but the timeline is the last two years is getting much worse.”

And the company recently announced a partnership with robotics company Asylon to expand into drones. It will integrate ZeroEyes’ human-verified AI gun detection software with Asylon’s aerial drone and robot dog video feeds, providing end users with an autonomous response capability in active shooter situations.

“Our grandparents and parents had nuclear attack drills from foreign threats, and we adults had fire drills growing up. Our children today have active shooter drills; things aren’t heading in the right direction,” said Mike Lahiff, CEO of ZeroEyes. “Enabling our AI gun detection technology for use beyond static video cameras is a huge step in combating the mass shooting epidemic that our country faces. Our partnership with Asylon Robotics means we’re able to outfit unmanned vehicles with real-time gun detection intel and tools such as lights and audible alarms to distract shooters, giving first responders time to respond to threats more quickly and safely from the air or on the ground, when every second counts.”

Alaimo emphasizes he and his company are not taking a position on the thorny issue of gun control.

“We created a solution for right now,” he said. “That solution is to detect guns and stop the next mass shooting.”
