Call for Workshop Papers:

Drone Systems: Cybersecurity and Threat Intelligence


Take advantage of a 50% discount on the second paper!

Important dates

Submission deadline

Notification deadline

Camera-ready deadline

Conference dates

Social engineering remains one of the most effective attack vectors against modern socio-technical systems, yet it is still poorly integrated into formal security engineering practice. This workshop advances a unifying perspective that treats trust and deception as first-class security primitives—measurable, attackable, and engineerable system properties across human, organizational, and AI-mediated environments.

The workshop brings together researchers and practitioners working on adversarial modeling, trust measurement, AI-mediated influence, detection systems, and human-in-the-loop defenses. Participants will examine how social engineering threats can be rigorously modeled, empirically evaluated, and mitigated using computational, organizational, and policy-aware approaches.

Through paper presentations and structured breakout discussions, the workshop aims to (1) consolidate fragmented research threads into a coherent socio-technical security framework, (2) identify shared metrics and evaluation challenges, and (3) define a research roadmap for defensible, trustworthy systems in high-consequence domains.

  • Trust, belief, authority, signaling, and deception as security-relevant system properties
  • Adversarial threat modeling for social engineering attacks
  • Measurement, metrics, and empirical evaluation of trust and susceptibility
  • AI-mediated social engineering (LLMs, autonomous agents, synthetic personas)
  • Detection and mitigation systems for influence and deception
  • Human-in-the-loop and organizational defenses
  • Simulation, agent-based models, and adversarial games
  • Economic, strategic, and institutional dimensions of trust failure
  • Policy, governance, and ethics grounded in technical threat models

All registered workshop papers will be part of the ICDF2C proceedings. Proceedings will be submitted for inclusion in leading indexing services, such as Web of Science, Compendex, Scopus, DBLP, EU Digital Library, IO-Port, MathSciNet, Inspec and Zentralblatt MATH.

Authors of selected papers will be invited to submit an extended version, via a fast track, to:

Authors have the opportunity to publish their articles in the EAI Endorsed Transactions journal selected by the conference (Scopus, Ei-indexed, ESCI-WoS, Compendex) by paying an additional $250, discounted from the standard $400 rate for conference authors.

The article’s publication is subject to the following requirements:

  • It must be an extended version of the conference paper with a different title and abstract. As a rule, at least 30% new content must be added.
  • The article will be processed once the conference proceedings have been published.
  • The article will be processed using the fast-track option.
  • Once the conference proceedings are published, the corresponding author should contact us at [email protected] with the details of their article to begin processing.


Additional publication opportunities:

Workshop Chairs

Justin Pelletier

Justin Pelletier is a Professor of Practice in Cybersecurity at Rochester Institute of Technology, with research spanning social engineering, trust modeling, adversarial simulation, and human-centered security. He has led interdisciplinary work integrating cybersecurity, organizational behavior, AI-mediated influence, and wargaming approaches to socio-technical risk. His interests focus on treating trust, deception, and legitimacy as measurable and engineerable system properties in high-consequence environments.

Sanjay Goel

Sanjay Goel is a Professor in the School of Business at the University at Albany, SUNY (UAlbany). He is also the Director of Research at the New York State Center for Information Forensics and Assurance at the University. He has worked at General Electric Global Research on engineering optimization primarily related to aircraft engine and power turbines. His research group at the University is currently engaged in cybersecurity and warfare-related projects including: investigation of computer security threats such as botnets and malware, risk analysis, security policy development and evaluation, security modeling, and development of self-organized complex systems. His self-organized system research includes traffic light coordination, nano-bio computing, and security modeling.

Franklin Zaromb

Franklin Zaromb leads research, development, and implementation of statistical and psychometric analyses for RAMA programs, tests, surveys, and various projects. Prior to his work at RAMA, Franklin was a senior researcher at the Center for Validity Studies at the Educational Testing Service in the United States. There, he led the development of tests and statistical and psychometric studies for the US intelligence agency’s “Sirius” program. The goal of the project was to examine biases in judgment and decision-making processes. His other projects include statistical studies on multi-year longitudinal data. Franklin completed his PhD in cognitive psychology at Washington University in St. Louis in 2010. He has conducted and published research in the areas of human learning, memory, judgment and decision-making, educational applications of cognitive science, and assessment and measurement.

A 50% discount on the second paper is available for participants registering two accepted papers, provided both papers are authored by the same individual who will also be the sole attendee.

How to Submit a Paper in Confy:
  1. Go to the Confy+ website.
  2. Log in or sign up as a new user.
  3. Select your desired track.
  4. Click the ‘Submit Paper’ link within the track and follow the instructions.

Alternatively, go to the Confy+ homepage and click on “Open Conferences.”

Submission Guidelines:

  • All papers must be submitted in English. 
  • Submitted PDFs should be anonymized.

  • Previously published work may not be submitted, nor may papers be concurrently submitted to any other conference or journal. Such papers will be rejected without review. 
  • Papers must follow the Springer formatting guidelines (available in the Author’s Kit section). 
  • Authors must read and agree to the Publication Ethics and Malpractice Statement.
  • As per new EU accessibility requirements, all figures, illustrations, tables, and images should be accompanied by descriptive text. Please refer to the document below, which will assist you in crafting Alternative Text (Alt Text).

HOW TO WRITE GOOD ALT TEXT

Full information: https://www.springernature.com/gp/policies/book-publishing-policies

AI Authorship Policy

Large Language Models (LLMs), such as ChatGPT, do not currently satisfy our authorship criteria. Notably, an attribution of authorship carries with it accountability for the work, which cannot be effectively applied to LLMs. We thus ask that the use of an LLM be properly documented in the Acknowledgements, or in the Introduction or Preface of the manuscript.

The use of an LLM (or other AI-tool) for “AI assisted copy editing” purposes does not need to be declared. In this context, we define the term “AI assisted copy editing” as AI-assisted improvements to human-generated texts for readability and style, and to ensure that the texts are free of errors in grammar, spelling, punctuation and tone. These AI-assisted improvements may include wording and formatting changes to the texts, but do not include generative editorial work and autonomous content creation. In all cases, there must be human accountability for the final version of the text and agreement from the authors that the edits reflect their original work. This reflects a similar stance taken on the AI generative figures policy, where it was acknowledged that there are cases where AI can be used to generate a figure without being concerned about copyright e.g. to generate a graph based on data provided by the author. 

AI Authorship Guidance

Authors should familiarise themselves with the current known risks of using AI models before using them in their manuscript. AI models have been known to plagiarise content and to create false content. As such, authors should carry out due diligence to ensure that any AI-generated content in their book is correct, appropriately referenced, and follows the standards laid out in our Book Authors’ Code of Conduct.

AI-generated Images Policy

The fast-moving area of generative AI image creation has resulted in novel legal copyright and research integrity issues. As publishers, we strictly follow existing copyright law and best practices regarding publication ethics. While legal issues relating to AI-generated images and videos remain broadly unresolved, Springer Nature journals and books are unable to permit their use for publication.

Exceptions:

  • Images/art obtained from agencies that we have contractual relationships with that have created images in a legally acceptable manner.
  • Images and videos that are directly referenced in a piece that is specifically about AI and such cases will be reviewed on a case-by-case basis.
  • The use of generative AI tools developed with specific sets of underlying scientific data that can be attributed, checked and verified for accuracy, provided that ethics, copyright and terms of use restrictions are adhered to.

* All exceptions must be labelled clearly as generated by AI within the image field.
As we expect things to develop rapidly in this field in the near future, we will review this policy regularly and adapt if necessary.

Note: Examples of image types covered by this policy include: video and animation, including video stills; photography; illustration such as scientific diagrams, photo-illustrations and other collages, and editorial illustrations such as drawings, cartoons or other 2D or 3D visual representations. Not included in this policy are text-based and numerical display items, such as tables, flow charts and other simple graphs that do not contain images.

Please note that not all AI tools are generative. The use of non-generative machine learning tools to manipulate, combine or enhance existing images or figures should be disclosed in the relevant caption upon submission to allow a case-by-case review.

AI-generated Images Guidance

For more information on the inclusion of third party content (i.e. any work that you have not created yourself and which you have reproduced or adapted from other sources) please see Rights, Permissions, Third Party Distribution.

Papers should be submitted through the EAI ‘Confy+’ system and must comply with the Springer format (see the Author’s Kit section).

  • Regular papers should be 12-20 pages in length.
  • Short papers should be 6-11 pages in length.

All conference papers undergo a thorough peer review process prior to the final decision and publication. This process is facilitated by experts in the Technical Program Committee during a dedicated conference period. Standard peer review is enhanced by EAI Community Review which allows EAI members to bid to review specific papers. All review assignments are ultimately decided by the responsible Technical Program Committee Members while the Technical Program Committee Chair is responsible for the final acceptance selection. You can learn more about Community Review here.

Papers must be formatted using the Springer LNICST Authors’ Kit. Instructions and templates are available from Springer’s LNICST homepage. Please make sure that your paper adheres to the format as specified in the instructions and templates. When uploading the camera-ready copy of your paper, please be sure to upload both:
  • a PDF copy of your paper formatted according to the above templates, and
  • a .ZIP archive containing the LaTeX or Word source material prepared according to the above guidelines.


Workshop Overview
An intensive 4-hour program exploring critical intersections of autonomous drone technologies, cybersecurity vulnerabilities, and strategic threat intelligence. Designed for cybersecurity professionals, intelligence analysts, and technology researchers seeking advanced insights into drone system exploitation and defensive strategies.
 
Instructor: Larry Leibrock brings unique expertise as a former US lethal personality targeting specialist with extensive drone targeting operations experience from Iraq and Afghanistan, combined with advanced digital forensics and field intelligence capabilities.
Workshop Structure
Duration: 4 hours (two 2-hour sessions)  
Delivery: Interactive lectures, hands-on exercises, collaborative analysis  
Platform: Practical demonstrations using DJI Mini 4K drone systems
Session Breakdown
Session 1: Drone System Vulnerabilities and Digital Forensics
(2 hours)
 
Part A: Threat Landscape and Attack Surfaces (50 minutes)
 
– Autonomous drone technological architectures and communication protocols
– Vulnerability identification and attack surface mapping across hardware, software, and RF domains
– Threat modeling methodologies specific to UAS platforms
– Real-world exploitation scenarios from operational targeting experience
 
Part B: Digital Forensics and Evidence Collection (50 minutes)
 
– Forensic artifact analysis and data recovery from drone systems
– Evidence collection techniques and chain of custody considerations
– Exploit demonstration and reverse-engineering methodologies
 
Hands-on Exercise (20 minutes)
 
– Team-based vulnerability mapping workshop using DJI Mini 4K platform
– Identification of exploitable weaknesses in commercial drone systems
– Brief team presentations and instructor feedback
 
Session 2: Strategic Defense and Operational Intelligence
(2 hours)
 
Part A: Threat Intelligence and Adversary Capabilities (50 minutes)
 
– Adversary TTP analysis based on field intelligence experience
– Case studies from Iraq/Afghanistan operations (appropriately sanitized)
– Intelligence cycle application to drone threat assessment
– Critical infrastructure protection scenarios
 
Part B: Defensive Strategies and Counter-UAS (50 minutes)
 
– Counter-UAS (C-UAS) technologies and limitations
– Defensive framework development for layered security
– Regulatory landscape and compliance (EU/Netherlands focus)
– Emerging threats and future research directions
 
Hands-on Exercise (20 minutes)
 
– Red team/blue team scenario: Teams alternate developing attack vectors and defensive responses
– Integration of technical vulnerabilities with strategic implications
– Group discussion on mitigation priorities and resource allocation
Value Proposition
This workshop distinguishes itself by integrating actual combat targeting operations perspective with advanced technical forensics—a rare combination in drone security training. Participants gain:
 
– Operational Context: How technical vulnerabilities translate to real-world security consequences based on combat zone experience
– Intelligence Tradecraft: Field-tested methodologies for drone system exploitation and analysis
– Technical-Strategic Integration: Bridging the gap between cybersecurity specialists and strategic analysts
– Actionable Frameworks: Practical tools applicable immediately to participants’ operational environments
 
Participant Outcomes
 
In just 4 hours, participants will develop:
– Comprehensive understanding of autonomous drone system vulnerabilities
– Practical digital forensics techniques for drone incident response
– Strategic threat assessment capabilities informed by operational intelligence
– Defensive frameworks integrating technical and procedural controls
– Cross-domain perspective connecting technical exploits to strategic implications
Special Considerations
The workshop maintains appropriate operational security while maximizing educational value. All demonstrations and case studies are unclassified and suitable for international audiences. DJI commercial platforms serve as teaching tools representing the majority of market deployment while allowing open discussion of their vulnerability profiles.
 
This workshop focuses on the intersection of technical exploitation and strategic intelligence, drawing directly from the instructor’s unique background spanning tactical drone operations, digital forensics, and field intelligence operations.