Leading Security in the Age of AI: A Conversation with Air Force Veteran & CEO John Dickson

AI’s potential and its dangers for cybersecurity: Insights from Air Force Veteran John Dickson’s conversation with EC-Council Founder Jay Bavisi

As AI rapidly transforms the digital landscape, more and more organizations are waking up not only to its promise but also to its risks. This is especially true in cybersecurity, where AI-generated code can both strengthen defenses through rapid threat detection and arm cybercriminals to launch dangerous attacks.

In an exclusive interview with EC-Council Founder Jay Bavisi, Air Force veteran and Bytewhisper Security CEO John Dickson offers candid insights into the challenges, realities, and future of cybersecurity.

In this episode, John shares a perspective shaped by decades in military and private-sector cybersecurity. He talks about his journey and the lessons he drew from military intelligence, and offers his critical views on AI’s role in modern software engineering across industries. In a world where human-written and AI-generated code now coexist, John underscores the need for stringent checks and responsible software development to maintain trust and security in organizations and the digital community.

A career in cybersecurity is not only for tech majors; anyone from a non-tech background can explore this field

Many cybersecurity professionals, like John himself, don’t start their careers with technical degrees. What brings them to this arena is curiosity, self-learning, and adaptability.

John, who studied political science, happened to be part of a team that worked on the early stages of the Sun SPARCstation and its companion computers and laptops. That exposure enabled a non-technical professional like him to transition into technology.

Structured programs, self-learning, and curiosity are closely knit

Several cybersecurity programs and university degrees are widely available today—options that didn’t exist 30 years ago. Yet, according to John, the single most important quality remains curiosity. The more you learn, the more you want to explore.

John concludes that formal education is only the beginning; curiosity and commitment to lifelong learning are the true essentials. An additional understanding of the history and customs of cybersecurity will also prove beneficial.  

The convergence of AI and cybersecurity

Though AI has been around for years, it was the arrival of ChatGPT that truly pushed it into the spotlight and made everyone take notice of its power.

When asked what frightened him most about the convergence of AI and cybersecurity, he replied, “People who trust the output of it inherently right now.”

AI accelerates software creation and documentation but also introduces risks such as “hallucinations,” where it produces incorrect or fabricated outputs. Yet many people still approach AI with a high level of trust.

Overreliance on AI-generated code

John’s biggest fear is that overreliance on AI-generated code without conducting proper checks can lead to dangerous vulnerabilities.

He shared an example where he used Perplexity to write a 100-word preamble for a LinkedIn post. The platform delivered a result, but the names, quotes, and other details it included were entirely inaccurate. Somehow, the AI had reimagined the scenario and written its own version!

The importance of guardrails and human oversight

When it comes to software, however, the real challenge with AI is surprisingly not hallucination; those errors can usually be spotted. The deeper issue is that AI is non-deterministic by nature.

In traditional software, determinism means that the same command produces the same output every time it is entered. That predictability is the backbone of quality, security, and trust. Not so with AI: enter the same prompt twice, and you may receive two different responses, even though nothing in the prompt has changed.

John illustrates this by comparing a neural network to a Japanese pachinko machine: the user launches the ball (the prompt, or input), but the path it takes and where it lands (the output) can differ greatly with each attempt. The results might look similar, but they are never quite identical.

Another example: a user who generates an image or a paragraph several times from the same prompt will start to notice minor variations in the output. This doesn’t happen every time, but it complicates software development, where everything from debugging to compliance and security auditing depends on reproducible behavior, as the sketch below illustrates.
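To make the contrast concrete, here is a minimal Python sketch (illustrative only, not anything from the interview) in which a deterministic function always agrees with itself, while a toy sampling step, standing in for an LLM’s next-token choice, can differ between calls:

```python
import random

# A deterministic function: the same input always produces the same output.
def checksum(command: str) -> int:
    return sum(ord(c) for c in command) % 256

# A toy stand-in for an LLM's sampling step: even with an identical
# "prompt", the next word is drawn from a probability distribution,
# so repeated calls can diverge. The words and weights are invented.
def sample_next_word(prompt: str) -> str:
    candidates = ["secure", "fast", "simple", "scalable"]
    weights = [0.4, 0.3, 0.2, 0.1]  # hypothetical model probabilities
    return random.choices(candidates, weights=weights, k=1)[0]

prompt = "Our software should be"
print(checksum(prompt) == checksum(prompt))                # always True
print(sample_next_word(prompt), sample_next_word(prompt))  # may differ

# Fixing the seed restores reproducibility, the same idea behind
# setting temperature to 0 or pinning a seed in real AI APIs.
random.seed(42)
print(sample_next_word(prompt))
```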

Non-determinism demands a new level of scrutiny and oversight, because unpredictability at the code level translates into unpredictable risk. Blindly trusting AI outputs is therefore dangerous, especially in critical systems; human review and rigorous vulnerability scanning remain essential.
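What might such checking look like in practice? Here is a deliberately minimal Python sketch, a toy rather than a substitute for a real SAST tool or the review process John describes, that flags a few obviously risky calls in generated code before it is accepted:

```python
import ast

# Calls that should never pass review silently in generated code.
RISKY_CALLS = {"eval", "exec", "system", "popen"}

def flag_risky_calls(source: str) -> list[str]:
    """Parse Python source and report calls to known-dangerous functions."""
    findings = []
    tree = ast.parse(source)
    for node in ast.walk(tree):
        if isinstance(node, ast.Call):
            func = node.func
            # Handles both bare names (eval) and attributes (os.system).
            name = getattr(func, "id", None) or getattr(func, "attr", None)
            if name in RISKY_CALLS:
                findings.append(f"line {node.lineno}: call to {name}()")
    return findings

# Hypothetical AI-generated snippet; it is parsed, never executed.
generated = "import os\nos.system('rm -rf /tmp/build')\nprint(eval(user_input))\n"
for finding in flag_risky_calls(generated):
    print("REVIEW REQUIRED:", finding)
```

A gate like this only narrows the funnel; the point, echoing John, is that a human still makes the final call on anything flagged.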

The question of the era: Will AI replace software jobs, or will the roles change?

Claims that AI will make software developers obsolete are overhyped. As John puts it, “With ChatGPT, casual use for social media purposes is manageable. But if it is for the larger community or critical systems, it requires rigorous security checks and human intervention.”

No one knows exactly how software roles will evolve. Entry-level hiring is already shifting, but as cybersecurity curricula evolve, skills such as critical thinking, curiosity, and adaptability will remain valuable and ensure long-term relevance.

Certain job functions built on easily automated tasks are expected to disappear, while demand for people who can connect AI capabilities to business needs will rise.

Consequentiality and explainability in AI: The role of AI managers

The demand for AI managers will grow over time; they are likely to become the gatekeepers of agentic processes within organizations. Today, people understand AI but not yet how to use it appropriately, and these gatekeepers will be the ones equipped to close that gap.

The importance of understanding what AI systems do and why is further magnified in high-stakes environments such as aviation and critical infrastructure.

From ambitious goals to real-world impact

A renowned hospital in Houston once set its sights on an ambitious AI project to “solve cancer.” When initial efforts weren’t successful, the team made a pivotal shift: instead of chasing difficult-to-achieve goals, they redirected their AI resources toward a more pressing, everyday problem: scheduling the flow of patients and resources.

By using AI to optimize the scheduling of facilities, rooms, doctors, equipment, and medication, the hospital achieved a remarkable improvement, delivering smoother and more humane care to its patients as well as their families.

This story highlights a critical lesson for organizations: the most impactful use of AI often lies not in headline-grabbing projects, but in addressing the practical, everyday challenges that truly affect people’s lives.

The adoption of AI in business and enterprise environments is a double-edged sword

The potential of AI is vast, but so are its risks, and not all AI systems are created equal. When AI is deployed in high-impact environments such as aviation or healthcare, it demands the highest levels of transparency and human oversight; to ensure safety and trust, these systems must undergo rigorous checks and be held accountable.

When AI is used for regular business purposes like scheduling hospital resources, it delivers tangible benefits with manageable risk.

John warns that agentic AI, which can execute a sequence of tasks on behalf of a user, poses real dangers if left unchecked: agentic processes can manipulate the systems they touch. An AI agent allowed to manage complex processes such as database administration or system configuration without checks or regulation can cause serious errors, including accidental data deletion, if the system isn’t designed to “fail well,” as the sketch below suggests.
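One common guardrail pattern is to deny agent actions by default and require explicit human approval for anything destructive. The Python sketch below is a hypothetical illustration of that idea; the action names and approval flow are invented, not any specific framework’s API:

```python
# Actions the agent may perform autonomously.
ALLOWED_ACTIONS = {"read_record", "update_record", "create_backup"}
# Actions that require a human in the loop before they run.
DESTRUCTIVE_ACTIONS = {"delete_record", "drop_table", "rewrite_config"}

def execute_agent_action(action: str, approved_by_human: bool = False) -> str:
    if action in ALLOWED_ACTIONS:
        return f"executed: {action}"
    if action in DESTRUCTIVE_ACTIONS:
        if approved_by_human:
            return f"executed with approval: {action}"
        # "Fail well": refuse and surface the request instead of guessing.
        return f"blocked: {action} requires human approval"
    # Unknown actions are denied by default rather than attempted.
    return f"blocked: {action} is not on the allowlist"

print(execute_agent_action("read_record"))
print(execute_agent_action("drop_table"))
print(execute_agent_action("drop_table", approved_by_human=True))
```

The design choice worth noting is the default: anything not explicitly permitted is blocked, so a misbehaving agent fails safely rather than improvising.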

Conclusion  

As AI becomes more central to how we build and protect software, it demands vigilance, adaptability, and ethical oversight. When we overhype AI, we invite blind trust and sloppy security practices. The real win is using AI as a smart co-pilot while humans fly the plane, bringing judgment, critical thinking, and detailed review.

AI is not here to make software developers extinct! As John points out, using tools like ChatGPT for casual purposes may be manageable, but in critical systems and environments like aviation and healthcare, we still need rigorous security checks and humans in the loop.

To go deeper into these issues, visit the podcast Leading Security in the Age of AI: A Conversation with Air Force Veteran & CEO John Dickson, where John not only unpacks them in much more detail but also jumps into a rapid-fire round of blunt, experience-driven takes on government cyber readiness, AI-generated code in enterprise systems, and what he’s truly willing to trust in high-impact environments.

For more conversations shaping the future of cybersecurity, subscribe to the Cybersecurity Podcast by EC-Council.
