inside sources print logo
Get up to date Delaware Valley news in your inbox

PENNYCUICK: Protecting Our Kids—Addressing the Dangers of AI-Generated Deepfakes in PA

In August, Lancaster County police launched an investigation into a disturbing case involving 20 high school female students. The perpetrator took these teenage girls’ real pictures and used artificial intelligence (AI) technology to generate nude “deepfake” images and distributed them on the internet. Despite the clear harm caused, the district attorney pointed out a critical problem: a gray area exists in the law that prevents charges from being filed in cases like these.

This incident is far from unique. We are witnessing a troubling rise in AI-generated sexual images of both minors and non-consenting adults. This technology can be used to create photos and videos to depict individuals in explicit scenarios that never occurred with astonishing and nearly indistinguishable accuracy.

Unfortunately, these deepfake images are not explicitly covered by existing state laws, including our child sexual abuse statutes.

Currently, for example, it is not illegal for a friend, colleague, or even a stranger to take photos from someone’s public social media profile, use AI to create explicit content, and then distribute the “deepfakes” online. Shockingly, some websites have even published realistic AI-generated sexual images of children.

As AI technology advances, it offers significant benefits to our daily lives, from healthcare innovations to improving transportation and business operations. But with this progress comes serious risks and unintended consequences. The National Institute of Standards and Technology has already called for federal standards to address the potential misuse of AI. However, Congress has yet to fully address the dangers posed by AI-generated content.

Here in Pennsylvania, as chair of the Senate Communication and Technology Committee, I introduced Senate Bill 1213 to address the alarming rise of AI-generated deepfake sexual images of children and non-consenting adults. Although current state law prohibits the distribution of intimate images without consent, it does not clearly address the use of AI deepfake technology. This loophole leaves many Pennsylvanians vulnerable to a new form of digital abuse, as seen in the recent case in Lancaster County.

The bill also explicitly prohibits the use of AI to generate child sexual abuse material—previously referred to as “child pornography.” With the changes contained in SB 1213, law enforcement will now have the ability to prosecute individuals who generate and disseminate these types of child sexual abuse materials.

Last week, the Pennsylvania legislature (or Pennsylvania General Assembly) passed Senate Bill 1213. For the first time in Pennsylvania’s history, legislation will be presented to the governor to combat the prevalent and highly disturbing “deepfake” images of minors and child pornography generated by artificial intelligence.

This bipartisan effort has garnered widespread support, including from the Pennsylvania Attorney General and district attorneys throughout the commonwealth. We anticipate the governor will sign this critical legislation into law soon.

AI technology has incredible potential for good, but it can also be exploited. Pennsylvania needs strong laws to protect its citizens from those who use this technology to generate sexual images without consent, particularly child sexual abuse materials. With the passage of SB 1213, we are sending a clear message: the insidious use of AI to harm others will not be tolerated in our state.

And most importantly, innocent victims, like the high school girls in Lancaster County, will be able to seek justice.

Three Mile Island Returns! Nuke Reactor To Power Microsoft’s AI Plans

When Exelon Generation shut down the Three Mile Island (TMI) Nuclear Reactor 1 in 2019, officials said it was too costly to operate. A $500 million Pennsylvania taxpayer-funded bailout proposal went nowhere in the state legislature. Plans were made to eventually tear it down.

Five years later, TMI is back in a big way thanks to Microsoft and artificial intelligence.

Microsoft just signed a 20-year agreement with Constellation Energy – which took over TMI in 2022 – worth $16 billion. It’s expected to add 3,400 jobs to Pennsylvania and bring in more than $3 billion in state and federal taxes.

“This agreement is a major milestone in Microsoft’s efforts to help decarbonize the grid in support of our commitment to become carbon negative,” said Bobby Hollis, VP of Energy, Microsoft.

More importantly, it will add 835 megawatts (MW) of carbon-free energy to the PJM grid. PJM Interconnection operates the regional power grid that serves states across the Midwest and mid-Atlantic region, including Pennsylvania.

PJM’s service area “is about 65 million people and about 21 percent of the U.S. economy. It’s the largest electricity market in the entire world,” Ken Zapinski with Pittsburgh Works Together, a coalition of business and utility executives and union leaders, told DVJournal in a podcast interview.

The facility is expected to be online by 2028.

Constellation CEO Joe Dominguez called TMI one of the “most reliable nuclear plants on the grid” and was happy it would be used again. He said it would become a new economic engine for Pennsylvania and the PJM grid.

For a certain generation of Pennsylvanians, it’s hard to hear the words “safe” and “Three Mile Island” in the same sentence. On March 28, 1979, a cooling malfunction at the Unit 2 reactor caused part of the core to melt, and led to the release of radioactive gases and iodine into the atmosphere. It’s considered the worst nuclear power accident in U.S. history.

While studies have shown its effect on residents and the environment was minimal, the effect on public opinion was enormous. The partial meltdown caused Americans to become wary of nuclear power.

The TMI Nuclear Reactor 1 wasn’t affected by the 1979 meltdown.

The federal Nuclear Regulatory Commission will have to approve restarting TMI, and Constellation will need to get the appropriate permits from Pennsylvania.

Gov. Josh Shapiro promised his administration would keep a watchful eye to make sure everything is safe. He said nuclear power will make the grid more reliable and deliver affordable power to the Keystone State.

State Sen. Lynda Schlegel Culver (R-Northumberland) said TMI’s return strengthens Pennsylvania’s status as an energy exporter. Pennsylvania is the second-largest net supplier to energy to other states behind only Texas, according to the federal Energy Information Administration. It’s the second-largest generator of nuclear energy.

Energy executives and advocates have called for the U.S. to tap into the nuclear sector for years. They put the blame on the federal government’s decision to subsidize wind and solar energy, distorting markets and undercutting nuclear power prices.

“There were people screaming, ‘If you’ve set up a system where you want more carbon-free electricity and an existing source, a huge existing source of carbon-free electricity is shutting down because it’s unprofitable, there’s a problem with your regulatory system,’” Zapinski said.

TMI has the opportunity to be a godsend for PJM. Officials said this year they had enough energy to last through the spring of 2026, but they faced an uncertain supply after that. Critics blame the Biden-Harris administration’s emissions policies that forces more coal and natural gas power plants off the grid.

An estimated 24,000 to 58,000 MW of energy will be retired by 2030 without being replaced, due in large part to regulations pushed by the Biden-Harris EPA. At the same time, energy demand has increased by 30 percent – per the North American Electric Reliability Council (NERC). Both NERC and PJM said wind and solar can not be counted on to be constant providers of energy because they’re weather-reliant. Nuclear energy is not.

The huge turnaround for TMI raised complaints from anti-nuclear activists. Eric Epstein with Three Mile Alert said earlier this month he expected taxpayers to eventually foot the bill because there’s not a market for nuclear. “We were told, ‘Let the marketplace decide.’ The market decided, and they decided it’s not nuclear.”

But Zapinski credited the private sector for the push for more nuclear power. He used it as an example of private businesses  realizing where the free market was headed.

“What you see are private sector companies making decisions to keep themselves solvent in a very unstable energy world,” he said.

Please follow DVJournal on social media: X@DVJournal or Facebook.com/DelawareValleyJournal

OPINION: We Must Do More to Prepare for Artificial Intelligence Advances

2024 may go down as the year Artificial Intelligence (AI) took the world by storm.

The news is exciting but also a little unsettling. In five to seven years, we read AI will be as smart as humans. In 20 years, some say, AI will be able to do anything we can do. These claims are tricky to evaluate, but what’s clear is that AI is advancing faster than most people realize.

The more we interact with AI, the quicker it evolves, learns, and develops. In the past decade, AI has moved from beating humans at Jeopardy to writing songs and tackling advanced coding. The uncanny realism of deepfake videos featuring celebrities like Tom Cruise reminds us that stealing someone’s face isn’t just a plot for Mission: Impossible anymore.

As chairs of the Pennsylvania Senate’s Communications and Technology Committee, we are committed to fostering innovation while protecting Pennsylvanians from disinformation and digital threats, including sexual exploitation. This spring, our legislative agenda includes several bills that carefully address the challenges posed by AI. Our goal is to allow government and the private sector to harness this technology’s full potential in a way that aligns with the public good.

The cornerstone of American Democracy is a citizen’s ability to make their voice heard on Election Day. As AI technology becomes more accessible, there is a growing risk that bad actors will exploit it to create deceptively realistic content that could disrupt the political process.

Already this year, a robocall with a deep-faked voice of President Joe Biden falsely told Democratic voters in the New Hampshire presidential primary not to vote. We all need accurate information to make the best and most informed decisions for our families and communities. A vote cast because of fraudulent information is a vote stolen. To safeguard our constituents and the integrity of Pennsylvania’s elections, we’ve introduced legislation to prohibit the use of AI to fraudulently misrepresent political candidates.

As parents, we were shaken by events that came to light last autumn in Westfield, N.J. The social media app Snapchat was used to circulate AI-generated nude photos of high school students as young as 14. The Westfield case and others like it helped inspire our bill tackling sexual exploitation through the nonconsensual creation of pornographic deepfake images.

In Pennsylvania, sharing intimate images of a person without consent is illegal. However, the law doesn’t clearly address the use of deepfake technology to spread similar, AI-generated images without the subject’s consent. Our legislation, a companion to state Rep. Ryan MacKenzie’s House Bill 1063, will make it clear that the use of these tools to create pornographic images without consent is illegal.

Finally, we’re developing legislation that would require a clear disclosure on all AI-generated material.  With this information, readers and viewers can make informed decisions and protect themselves from misleading content. We clearly identify ourselves at the end of this op-ed. That’s because we believe Pennsylvanians have the right to know who (or what) creates the media they consume.

Some 14 states have adopted resolutions or enacted laws related to AI technology. Pennsylvania must be ready to join them with a thoughtful, commonsense legal framework if we want to manage the growing influence and potential risks of AI in our elections, workplaces, and daily lives.

Please follow DVJournal on social media: Twitter@DVJournal or Facebook.com/DelawareValleyJournal

 

KING: Why Haven’t the Presidential Candidates Embraced or Even Mentioned AI?

Memo to presidential candidates Joe Biden and Donald Trump:

Assuming one of you will be elected president of the United States next year, many computer scientists believe you should be addressing what you think about artificial intelligence and how you plan to deal with the surge in this technology, which will break over the nation in the next president’s term.

Gentlemen, this matter is urgent, yet only a little has been heard from either of you who are seeking the highest office. President Biden did sign a first attempt at guidelines for AI, but he and Trump have been quiet on its transformative impact.

Indeed, the political class has been silent, preoccupied as it is with old and — against what will happen — irrelevant issues. Congress has been as silent as Biden and Trump. There are two congressional AI caucuses, but they have been concerned with minor issues, like AI in political advertising.

Climate change and AI stand out as game changers in the next presidential term.

On climate change, both of you have spoken: Biden has made climate change his own; Trump has dismissed it as a hoax.

The AI tsunami is rolling in, and the political class is at play, unaware that it is about to be swamped by a huge new reality: exponential change that can neither be stopped nor legislated into benignity.

Before the next presidential term is far advanced, the experts tell us that the nation’s life will be changed, perhaps upended by the surge in AI, which will reach into every aspect of how we live and work.

I have surveyed the leading experts in universities, government and AI companies and they tell me that any form of employment that uses language will be changed. Just this will be an enormous upset, reaching from journalism (where AI already has had an impact) to the law (where AI is doing routine drafting) to customer service (where AI is going to take over call centers) to fast food (where AI will take the orders).

The more one thinks about AI, the more activities come to mind that will be severely affected by its neural networks.

Canvas the departments and agencies of the government, and you will learn the transformational nature of AI. In the departments of Defense, Treasury and Homeland Security, AI is seen as a serious agent of change — even revolution.

The main thing is not to confuse AI with automation. It may resemble it, and many may take refuge in the benefits of automation, especially job creation. But AI is different. Rather than job creation, it appears, at least in its early iterations, set to do major job obliteration.

But there is good AI news, too.  And those in the political line of work can use good news, whetting the nation’s appetite with the advances that are around the corner with AI.

Many aspects of medicine will, without doubt, rush forward. Omar Hatamleh, chief adviser on artificial intelligence and innovation at NASA’s Goddard Space Flight Center, says the thing to remember is that AI is exponential, but most thinking is linear.

Hatamleh is excited by the tremendous effect AI will have on medical research. He says that a child born today can expect to live to 120 years of age. How is that for a campaign message?

A good news story in AI should be enough to make campaign managers and speechwriters ecstatic. What a story to tell; what fabulous news to attach to a candidate. Think of an inaugural address that can claim AI research is going to begin to end the scourges of cancer, Alzheimer’s, Sickle cell and Parkinson’s.

Think of your campaign. Think of how you can be the president who broke through the disease barrier and extended life. AI researchers believe this is at hand, so what is holding you back?

Many would like to write the inaugural address for a president who can say, “With the technology that I will foster and support in my administration, America will reach heights of greatness never before dreamed of and which are now at hand. A journey into a future of unparalleled greatness begins today.”

So why, oh why, have you said nothing about the convulsion — good or bad — that is about to change the nation? Here is a gift as palpable as the gift of the moonshot was for John F. Kennedy.

Where are you? Either of you?

 

Not Much ‘Intelligence’ in Biden’s AI Executive Order, Experts Say

It may be the only presidential executive order ever inspired by Tom Cruise.

Earlier this year at Camp David, President Joe Biden settled down to watch the latest “Mission Impossible” movie installment. In the film, Cruise and his IMF team face off against a rogue, sentient artificial intelligence (AI) that threatens all of humanity. The movie reportedly left the president shaken.

“If he hadn’t already been concerned about what could go wrong with AI before that movie, he saw plenty more to worry about,” said White House Deputy Chief of Staff Bruce Reed.

Reed said Biden is “impressed and alarmed” by what he has seen from AI. “He saw fake AI images of himself, of his dog… he’s seen and heard the incredible and terrifying technology of voice cloning, which can take three seconds of your voice and turn it into an entire fake conversation.”

Given that the cutting-edge communications technology when Biden was born was black-and-white television, it’s no surprise he is awed by the power of AI. His order adds additional layers of government regulation to this quickly developing, cutting-edge tech.

For example, Biden will “require that developers of the most powerful AI systems share their safety test results and other critical information with the U.S. government.” It also instructs federal agencies to establish guidelines “for developing and deploying safe, secure, and trustworthy AI systems.” And companies doing that cutting-edge work that can keep the U.S. competitive with China are required to notify the federal government when they are training their models and share the results of “red-team safety tests.”

According to Reason Magazine’s science reporter Ronald Bailey, “Red-teaming is the practice of creating adversarial squads of hackers to attack AI systems with the goal of uncovering weaknesses, biases, and security flaws. As it happens, the leading AI tech companies— OpenAI, Google, Meta—have been red-teaming their models all along.”

Tech experts say that movies from the multiplex aren’t necessarily the best sources for setting public policy.

“I just don’t understand why that’s where people’s heads go,” Shane Tews, a cybersecurity Nonresident Senior Fellow at the American Enterprise Institute, said. “And I do think it really is because they don’t have a lot of examples to replace that ‘Terminator’ feeling.

“I think as we start to understand how to have a better human-machine relationship, that it’ll work better,” Tews added. “I’m not sure that people will necessarily get it, but people are kind of in a crazy moment in their heads right now.”

The White House has embraced the crazy, critics say, spreading fear about existential threats.

“When people around the world cannot discern fact from fiction because of a flood of AI-enabled mis- and disinformation, I ask, is that not existential for democracy?” said Vice President Kamala Harris during a speech at the United Kingdom’s AI Safety Summit. In her mind, a variety of issues need to be addressed to manage the growing AI industry. “We must consider and address the full spectrum of AI risk — threats to humanity as a whole, as well as threats to individuals, communities, to our institutions, and to our most vulnerable populations.”

Biden’s proposal was praised by the Electronic Frontier Foundation. The digital rights nonprofit supports regulating the use of AI in certain situations, like housing, but not the technology itself.

“AI has extraordinary potential to change lives, for good and ill,” Karen Gullo, an EFF analyst, said. “We’re glad to see the White House calling out algorithmic discrimination in housing, criminal justice, policing, the workplace, and other areas where bias in AI promotes inequity in outcomes. We’re also glad to see that the executive order calls for strengthening privacy-preserving technologies and cryptographic tools. The order is full of ‘guidance’ and ‘best practices,’ so only time will tell how it’s implemented.”

Other technology policy analysts, like Adam Thierer, panned the executive order.

“Biden’s EO is a blueprint for back door computing regulation that could stifle America’s technology base,” said Thierer, a resident senior fellow, Technology & Innovation, for R Street Institute. “It will accelerate bureaucratic micro-management of crucial technologies that we need to be promoting, not punishing, as we begin a global race with China and other nations for supremacy in next-generation information and computational capabilities.”

He has argued people shouldn’t be pessimistic about new technology or attempt to go into what he calls “technological stasis.”

And Doug Kelly, CEO of the American Edge Project, said the temptation of Washington, D.C., to overregulate could cost the U.S. economy.

“A report by PwC estimates the global economic effect of AI to be $14 trillion by 2030, with China and North America projected to be the biggest winners in terms of economic gain,” Kelly wrote for DCJournal. “But if regulators in Washington or Europe tie the hands of AI innovators, we run the risk of falling behind in AI leadership to China, which has invested hundreds of billions in an attempt to gain global AI superiority.”

AI is already used in autocorrect, autocomplete, chatbots, translation software, and programmable things like robot vacuums or automatic coffee makers. Popular sci-fi characters like C-3PO, R2D2, and JARVIS are also AI-driven.

Tews suggests people get frustrated when technology doesn’t work as it should. “I think people project that onto what happens when I get into these situations where it’s me and this machine that’s not acting appropriately – I’m not getting the net result of what I want, and I don’t want there to be more of that my life.”

KING: Artificial Intelligence — the Greatest Disruptor Ever?

To rephrase Leon Trotsky: You may not be interested in artificial intelligence, but artificial intelligence is interested in you.

Suddenly, long-rumored and awaited, AI is upon the world—a world that isn’t ready for the massive and forever disruption it threatens.

AI could be the greatest disruptor in history, surpassing the arrival of the printing press, the steam engine, and electricity. Those all led to good things.

At this time, the long-term effects of AI are just speculative, but they could be terrifying, throwing tens of millions out of work and making a mockery of truth, rendering pictures and printed words unreliable.

There is no common view on the impact of AI on employment. When I ask, the scientists working on it point to the false fears that once greeted automation. In reality, jobs swelled as new products needed new workers.

My feeling is that the job scenario has yet to be proven with AI. Automation added to work by making old work more efficient and creating things never before enjoyed, and, in the process, opening up new worlds of work.

AI, it seems to me, is all set to subtract from employment, but there is no guarantee it will create great, new avenues of work.

An odd development, spurred by AI, might be in a revival of unionism. More people might want to join a union in the hope that this will offer job security.

The endangered people are those who do less-skilled tasks, like warehouse laborers or fast-food servers. Already Wendy’s, the fast-food chain, is working to replace order-takers in the drive- through lanes with AI-operated systems, mimicking human beings.

Also threatened are those who may find AI can do much, if not all, of their work as well as they do. They include lawyers, journalists, and musicians.

Here the AI impact could, in theory, augment or replace our culture with new creations; superior symphonies than those composed by Beethoven or better country songs than those by Kris Kristofferson.

I asked the AI-powered Bing search engine a question about Adam Smith, the 18th-century Scottish economist. Back came three perfect paragraphs upon which I couldn’t improve. I was tempted to cut-and-paste them into the article I was writing. It is disturbing to find out you are superfluous.

Even AI’s creators and those who understand the technology are alarmed. In my reporting, they range from John E. Savage, An Wang professor emeritus of computer science at Brown University, to Stuart J. Russell, professor of computer science at the University of California, Berkeley, and one of the preeminent researchers and authors on AI. They both told me that scientists don’t actually know how AI works once it is working. There is general agreement that it should be regulated.

Russell, whose most recent book is “Human Compatible: Artificial Intelligence and the Problem of Control,” was one of a group of prominent leaders who signed an open letter on March 29 urging a six-month pause in AI development until more is understood—leading, perhaps, to regulation.

And there’s one rub: How do you regulate AI? Having decided how to regulate AI, how would it be policed? By its nature, AI is amorphous and ubiquitous. Who would punish the violators and how?

The public became truly aware of AI as recently as March 14 with the launch of GPT-4, the successor to GPT-3, which is the technology behind the chatbot ChatGPT. Billions of people went online to test it, including me.

The chatbot answered most of the questions I asked it more or less accurately, but often with some glaring error. It did find out about a friend of my teenage years, but she was from an aristocratic English family, so there was a paper trail for it to unearth.

Berkeley’s Russell told me that he thinks AI will make 2023 a seminal year “like 1066  [the Norman Conquest of England].”

That is another way of saying we are balanced on the knife-edge of history.

Of course, you could end AI, but you would have to get rid of electricity — hardly an option.

Conshohocken-Based ZeroEyes Watches Out for Guns in Schools

No parent can completely dismiss the possibility of their child’s school being the scene of a shooting incident. The tragic school shooting in Uvalde, Texas still reverberates.

Various theories have been espoused on how to curtail mass shootings. Sam Alaimo has put his theories into practice.

Alaimo is the founder and chief financial officer of ZeroEyes, a Conshohocken-based security firm that has developed technology designed to detect the presence of a gun, minimize the risk of a mass shooting and keep children safe using AI (artificial intelligence). The company is the only creator of the only A.I.-based gun detection video analytics platform that holds the U.S. Department of Homeland Security SAFETY Act Designation.

Alaimo, a former Navy SEAL, helped launch the company in 2018 as an effort to protect children and their teachers.

Sam Alaimo

“I met our CEO, Mike Lahiff, on a SEAL team over a decade ago,” Alaimo recalls. “He was at his daughter’s school in 2018 after the Parkland shooting and he noticed all the physical security cameras there that were not proactive. They just sat there and did nothing.”

Alaimo, Lahiff, and their colleagues spent two years developing a system that would detect guns more efficiently than existing technology while still being cost-effective. The process included a rigorous testing protocol.

“We brought in cameras,” Alaimo said. “We used different angles different lighting conditions, different guns, different hand sizes, every different variable we could think of. That ended up being the unique difference between us and the other companies.”

That is one element of the ZeroEyes system. Then there is the human element; a team of observers who monitor and analyze the information provided by the system of cameras.

To protect clients’ privacy and the disclosure of confidential information, the analysts do not watch live video streams.

“The analyst is looking at a blank screen,” Alaimo said. ”The only time an image is seen on that screen is when a gun has been identified as either a true analysis or a false positive.”

That observers have backgrounds in the military or as first responders, and their professional knowledge is essential for the ZeroEyes system to operate at maximum efficiency.

“Our goal,” Alaimo says, “and what makes the software unique, is that when a gun is exposed in front of a security camera, within three to five seconds that alert will be sent to the first responders or the client. It goes from the camera to our ZeroEyes operating center in Conshohocken to the client.

“To be that quick, we need people who are very comfortable identifying guns and very calm under pressure.”

Alaimo and his team envisioned their system as something that would be used primarily in schools. But by the time it was officially unveiled in 2020, many schools had resorted to virtual education in response to the COVID-19 pandemic.

“We had a lot of initial interest,” Alaimo recalls. “Then COVID-19 happened.”

To keep the company afloat, the system was marketed to governmental and commercial entities as well as educational institutions.

Today, Alaimo says ZeroEyes technology has been introduced in 25 states, with clients from the education sector in 19 of them. And the company’s reach is expanding.

“By the end of the year we should be in well over 40 (states),” he says.

In the wake of school shootings of recent years, Alaimo says the company is receiving an increasing number of inquiries.

“The interest has grown exponentially,” he said. “It seems that the COVID-19 issue, the lockdowns, the mental health (issues), resulting from that is causing an increase in shootings. I’m not sure what the cause is but the timeline is the last two years is getting much worse.”

And the company recently announced a partnership with robotics company Asylon to expand into drones. It will integrate ZeroEyes’ human-verified A.I. gun detection software with Asylon’s aerial drone and robot dog video feeds, providing end users with an autonomous response capability in active shooter situations.

“Our grandparents and parents had nuclear attack drills from foreign threats, and we adults had fire drills growing up. Our children today have active shooter drills–things aren’t heading in the right direction,” said Mike Lahiff, CEO of ZeroEyes. “Enabling our A.I. gun detection technology for use beyond static video cameras is a huge step in combating the mass shooting epidemic that our country faces. Our partnership with Asylon Robotics means we’re able to outfit unmanned vehicles with real-time gun detection intel and tools such as lights and audible alarms to distract shooters, giving first responders time to respond to threats more quickly and safely from the air or on the ground, when every second counts.”

Alaimo emphasizes he and his company are not taking a position on the thorny issue of gun control.

“We created a solution for right now,” he said. “That solution is to detect guns and stop the next mass shooting.”

 

Please follow DVJournal on social media: Twitter@DVJournal or Facebook.com/DelawareValleyJournal