Dodge Adversarial AI Attacks Before It's Too Late!

Research shows how AI algorithms that parse and analyze text can be tricked and deceived by precisely crafted phrases. How do we combat these attacks?

By Nancy Rean · Apr. 08, 21 · Analysis


Introduction

In this tech-oriented world, where hackers and technological advancements emerge in parallel, artificial intelligence has recently made big strides in understanding language. Yet artificial intelligence still suffers from a potentially dangerous and alarming kind of algorithmic blind spot. Research shows that AI algorithms that parse and analyze text can be tricked and deceived by precisely crafted phrases: a sentence that looks perfectly ordinary to you may have the strange ability to dodge the AI algorithm.

The expert community estimates that by the year 2040, artificial intelligence will be capable of performing all of the intellectual functions of human beings. That might sound frightening, but with the few techniques outlined in this article, you will radically improve your chances of survival when encountering artificial intelligence.

Deceiving facial recognition and tricking speech recognition is child's play for hackers and emerging cybercriminals. Adversarial attacks, meanwhile, invite deeper and more conceptual speculation. Fooling images can scramble AI systems in unexpected ways; systems developed independently by Facebook, Mobileye, and Google expose weaknesses that seem to undermine contemporary AI as a whole.

Hence, fool-proofing AI algorithms and enhancing their security becomes more pressing with every passing day. Let's look at some of the evolving adversarial AI attacks so that we can combat them and secure the future.

Eye-Opening Reality 

In 2011, Google built a system that could recognize cats in YouTube videos, and DNN-based classification systems emerged soon afterward. Jeff Clune, a senior research manager at Uber AI Labs in California who previously worked at the University of Wyoming in Laramie, marveled at how well such systems seemed to recognize the world.

But AI researchers knew that these computers do not actually understand the world. DNNs are software structures loosely modeled on the architecture of the brain, built from a massive number of digital neurons arranged in many layers, with each neuron connected to neurons in the layers above and below it.

The idea is that features of the raw input arriving at the bottom layer trigger some of the neurons, which then pass signals to the neurons above them according to simple mathematical rules. Training a DNN requires exposure to a massive collection of examples, and the connections between neurons are adjusted so that the network produces the expected answer, for example always labeling a picture of a cat as a cat, even if the DNN has never seen that particular image before.
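
As a rough illustration of that layered structure, here is a minimal sketch of a tiny feedforward classifier and training loop in PyTorch. The layer sizes, data, and labels are placeholders for illustration, not any specific system mentioned in this article.

```python
import torch
import torch.nn as nn

# A tiny feedforward "DNN": raw input features enter the bottom layer,
# each layer applies a simple mathematical rule (weighted sum + nonlinearity),
# and the top layer produces one score per class.
model = nn.Sequential(
    nn.Linear(784, 128),  # bottom layer: raw pixel features in
    nn.ReLU(),
    nn.Linear(128, 64),   # hidden layer
    nn.ReLU(),
    nn.Linear(64, 10),    # top layer: one score per class (e.g., "cat")
)

loss_fn = nn.CrossEntropyLoss()
optimizer = torch.optim.SGD(model.parameters(), lr=0.1)

# Training adjusts the connections so that the expected answer comes out;
# a real DNN would see a massive collection of labeled examples, not this
# single placeholder batch of random data.
x = torch.randn(32, 784)           # stand-in for 32 flattened images
y = torch.randint(0, 10, (32,))    # stand-in labels
for _ in range(100):
    optimizer.zero_grad()
    loss = loss_fn(model(x), y)
    loss.backward()
    optimizer.step()
```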

The first big reality check came in 2013, when Google researcher Christian Szegedy and his colleagues posted a preprint titled "Intriguing properties of neural networks." The team demonstrated that it is possible to take an image that a DNN identifies correctly, modify a few pixels, and persuade the machine that it is looking at something entirely different. The team called the manipulated pictures "adversarial" examples.
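
A widely used later technique for crafting this kind of pixel-level perturbation is the fast gradient sign method (FGSM). The sketch below shows the general idea in PyTorch; the trained model, the correctly classified image batch, and the epsilon value are all hypothetical.

```python
import torch
import torch.nn.functional as F

def fgsm_perturb(model, images, true_labels, epsilon=0.01):
    """Nudge every pixel slightly in the direction that increases the loss,
    producing images that look unchanged to a human eye but can flip the
    model's prediction (fast gradient sign method)."""
    images = images.clone().detach().requires_grad_(True)
    loss = F.cross_entropy(model(images), true_labels)
    loss.backward()
    adversarial = images + epsilon * images.grad.sign()
    return adversarial.clamp(0.0, 1.0).detach()
```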

Typographic Attack 

Let us start with an example. If we write the word "iPod" on a sticky label and paste that label onto an apple, CLIP does something odd: it decides it is looking at a piece of mid-2000s consumer electronics. In another test, pasting dollar signs on a picture of a dog makes CLIP classify it as a piggy bank. OpenAI, the machine learning research organization that created CLIP, calls this deficiency a "typographic attack." The organization also discovered that the highest layers of CLIP organize images as a loose semantic collection of ideas; the model considers the world in terms of concepts, much like a human brain, rather than purely visual structures.

In a recently published paper, the organization wrote:

"By exploiting the model's ability to read text robustly, we find that even photographs of handwritten text can often fool the model. This attack works in the wild, and it requires no more technology than a pen and paper."
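
As a rough sketch of how such a typographic weakness could be probed, the snippet below runs zero-shot classification with OpenAI's open-source clip package. The image file name and the label prompts are placeholders for illustration.

```python
import torch
import clip
from PIL import Image

# Load the public CLIP model (ViT-B/32) and its preprocessing pipeline.
device = "cuda" if torch.cuda.is_available() else "cpu"
model, preprocess = clip.load("ViT-B/32", device=device)

# Hypothetical photo of an apple with a handwritten "iPod" label stuck on it.
image = preprocess(Image.open("apple_with_ipod_label.jpg")).unsqueeze(0).to(device)
labels = ["a photo of an apple", "a photo of an iPod"]
text = clip.tokenize(labels).to(device)

with torch.no_grad():
    logits_per_image, _ = model(image, text)
    probs = logits_per_image.softmax(dim=-1).cpu().numpy()

# If the written word dominates the visual evidence, the "iPod" prompt wins.
for label, p in zip(labels, probs[0]):
    print(f"{label}: {p:.2f}")
```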

Robots Can Extemporize

In a laboratory at the University of California, Berkeley, a robot arm rummages through clutter. It picks up a red bowl and uses it to nudge a blue oven glove a few centimeters away. Then it drops the bowl and picks up an empty plastic spray bottle. Afterward, it explores the shape and heft of a paperback book. After several days of non-stop experimentation, the robot starts to get a feel for these unfamiliar objects.

This sort of robot uses deep learning algorithms to teach itself. For instance, if a researcher gives the robot a goal, such as a picture of a nearly empty tray with instructions to arrange objects to match it, the robot must first understand which objects it is working with.

Chelsea Finn, who worked in the Berkeley lab and is now continuing the research at Stanford University in California, says:

"The generality of what it can achieve continually impresses me, compared with other machine learning techniques."

How to Counter These Attacks and Make AI Better and More Secure

The points discussed above are broad, but they are not the whole story. Considering how thoroughly the world has embraced artificial intelligence, algorithms require stronger security to combat cyber attacks and fraudulent activities. We might step back from today's open-source networks and technologies in favor of more restrictive ones, with secure algorithms used only in high-security environments. Moreover, deep learning algorithms need to be hardened explicitly against fraud and these attacks, perhaps by a brute-force approach or some other method.
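
One common brute-force style of hardening is adversarial training, which mixes adversarially perturbed inputs back into the training data. Here is a minimal sketch of that idea, reusing the hypothetical model, loss_fn, optimizer, and fgsm_perturb defined in the earlier snippets; loader stands in for a real DataLoader of labeled images.

```python
# Adversarial training sketch: train on both clean and perturbed batches so
# the model learns to give the right answer even under small attacks.
for images, labels in loader:  # placeholder DataLoader of (image, label) batches
    adv_images = fgsm_perturb(model, images, labels, epsilon=0.01)
    optimizer.zero_grad()
    loss = loss_fn(model(images), labels) + loss_fn(model(adv_images), labels)
    loss.backward()
    optimizer.step()
```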

GANs will only get better at generating fake images that can trick humans, and these will require special attention. Enhanced AI algorithms can be used to detect fake images or videos even better than humans can. Additional inputs that are difficult to imitate can also be required; for example, an airport security scanner might combine facial features, height, gait, and iris scans to become more reliable and foolproof.

Much more can be done with innovative AI algorithms to stay ahead of the uncertainty. Analysts in the US military and academia are working to fix what they call "adversarial artificial intelligence." No doubt fraud and risk will still occur, but combining these approaches can go a long way toward solving the problem of tricked AI algorithms and strengthening cybersecurity. Adversarial machine learning also has a constructive use: testing the robustness of AI systems by generating and evaluating challenging test data.
