Call for Workshop Papers:
Drone Systems: Cybersecurity and Threat Intelligence
Notification deadline
- 30 April, 2026
Camera-ready deadline
- 15 May, 2026
Conference dates
- 8 - 10 September, 2026
- Reykjavík, Iceland
Scope
Social engineering remains one of the most effective attack vectors against modern socio-technical systems, yet it is still poorly integrated into formal security engineering practice. This workshop advances a unifying perspective that treats trust and deception as first-class security primitives—measurable, attackable, and engineerable system properties across human, organizational, and AI-mediated environments.
The workshop brings together researchers and practitioners working on adversarial modeling, trust measurement, AI-mediated influence, detection systems, and human-in-the-loop defenses. Participants will examine how social engineering threats can be rigorously modeled, empirically evaluated, and mitigated using computational, organizational, and policy-aware approaches.
Through paper presentations and structured breakout discussions, the workshop aims to (1) consolidate fragmented research threads into a coherent socio-technical security framework, (2) identify shared metrics and evaluation challenges, and (3) define a research roadmap for defensible, trustworthy systems in high-consequence domains.
Topics
- Trust, belief, authority, signaling, and deception as security-relevant system properties
- Adversarial threat modeling for social engineering attacks
- Measurement, metrics, and empirical evaluation of trust and susceptibility
- AI-mediated social engineering (LLMs, autonomous agents, synthetic personas)
- Detection and mitigation systems for influence and deception
- Human-in-the-loop and organizational defenses
- Simulation, agent-based models, and adversarial games
- Economic, strategic, and institutional dimensions of trust failure
- Policy, governance, and ethics grounded in technical threat models
Publication
All registered workshop papers will be included in the ICDF2C conference proceedings. The proceedings will be submitted for inclusion in leading indexing services such as Web of Science, Compendex, Scopus, DBLP, EU Digital Library, IO-Port, MathSciNet, Inspec, and Zentralblatt MATH.
Authors of accepted papers will be invited to submit an extended version, via a fast track, to:
Authors have the opportunity to publish their articles in the EAI Endorsed Transactions journal selected by the conference (Scopus, Ei-indexed, ESCI-WoS, Compendex) by paying an additional $250, discounted from the standard $400 rate for conference authors.
The article’s publication is subject to the following requirements:
- It must be an extended version of the conference paper with a different title and abstract. In general, 30% of new content must be added.
- The article will be processed once the conference proceedings have been published.
- The article will be processed using the fast-track option.
- Once the conference proceedings are published, the corresponding author should contact us at [email protected] with the details of their article to begin processing.
Additional publication opportunities:
- EAI Transactions series (Open Access)
- EAI/Springer Innovations in Communications and Computing Book Series (titles in this series are indexed in Ei Compendex, Web of Science & Scopus)
Workshop chairs
Justin Pelletier
Justin Pelletier is a Professor of Practice in Cybersecurity at Rochester Institute of Technology, with research spanning social engineering, trust modeling, adversarial simulation, and human-centered security. He has led interdisciplinary work integrating cybersecurity, organizational behavior, AI-mediated influence, and wargaming approaches to socio-technical risk. His interests focus on treating trust, deception, and legitimacy as measurable and engineerable system properties in high-consequence environments.
Sanjay Goel
Sanjay Goel is a Professor in the School of Business at the University at Albany, SUNY (UAlbany), and Director of Research at the New York State Center for Information Forensics and Assurance at the University. He previously worked at General Electric Global Research on engineering optimization, primarily for aircraft engines and power turbines. His research group at the University is currently engaged in cybersecurity and warfare-related projects, including the investigation of computer security threats such as botnets and malware, risk analysis, security policy development and evaluation, security modeling, and the development of self-organized complex systems. His self-organized systems research includes traffic light coordination, nano-bio computing, and security modeling.
Franklin Zaromb
Franklin Zaromb leads research, development, and implementation of statistical and psychometric analyses for RAMA programs, tests, surveys, and various projects. Prior to his work at RAMA, Franklin was a senior researcher at the Center for Validity Studies at the Educational Testing Service in the United States. There, he led test development and statistical and psychometric studies for a US intelligence agency’s “Sirius” program, a project aimed at examining biases in judgment and decision-making processes. His other projects include statistical studies of multi-year longitudinal data. Franklin completed his PhD in cognitive psychology at Washington University in St. Louis in 2010. He has conducted and published research on human learning, memory, judgment and decision-making, educational applications of cognitive science, and assessment and measurement.
Submission Guidelines
A 50% discount on the second paper is available for participants registering two accepted papers, provided both papers are authored by the same individual who will also be the sole attendee.
- Go to Confy+ website.
- Log in or sign up as a new user.
- Select your desired track.
- Click the ‘Submit Paper’ link within the track and follow the instructions.
Alternatively, go to the Confy+ homepage and click on “Open Conferences.”
Submission Guidelines:
- All papers must be submitted in English.
- Submitted PDFs should be anonymized.
- Previously published work may not be submitted, nor may papers be concurrently submitted to any other conference or journal. Such papers will be rejected without review.
- Papers must follow the Springer formatting guidelines (available in the Author’s Kit section).
- Authors must read and agree to the Publication Ethics and Malpractice Statement.
- Per new EU accessibility requirements, all figures, illustrations, tables, and images must be accompanied by descriptive text. Please refer to the document below for guidance on crafting Alternative Text (Alt Text).
Springer AI Policies and Guidance
Full information: https://www.springernature.com/gp/policies/book-publishing-policies
AI Authorship Policy
Large Language Models (LLMs), such as ChatGPT, do not currently satisfy our authorship criteria. Notably, an attribution of authorship carries with it accountability for the work, which cannot be effectively applied to LLMs. We thus ask that the use of an LLM be properly documented in the Acknowledgements, or in the Introduction or Preface of the manuscript.
The use of an LLM (or other AI-tool) for “AI assisted copy editing” purposes does not need to be declared. In this context, we define the term “AI assisted copy editing” as AI-assisted improvements to human-generated texts for readability and style, and to ensure that the texts are free of errors in grammar, spelling, punctuation and tone. These AI-assisted improvements may include wording and formatting changes to the texts, but do not include generative editorial work and autonomous content creation. In all cases, there must be human accountability for the final version of the text and agreement from the authors that the edits reflect their original work. This reflects a similar stance taken on the AI generative figures policy, where it was acknowledged that there are cases where AI can be used to generate a figure without being concerned about copyright e.g. to generate a graph based on data provided by the author.
AI Authorship Guidance
Authors should familiarise themselves with the current known risks of using AI models before using them in their manuscript. AI models have been known to plagiarise content and to create false content. As such, authors should carry out due diligence to ensure that any AI-generated content in their book is correct, appropriately referenced, and follows the standards as laid out in our Book Authors’ Code of Conduct.
AI-generated Images Policy
The fast-moving area of generative AI image creation has resulted in novel legal copyright and research integrity issues. As publishers, we strictly follow existing copyright law and best practices regarding publication ethics. While legal issues relating to AI-generated images and videos remain broadly unresolved, Springer Nature journals and books are unable to permit their use for publication.
Exceptions:
- Images/art obtained from agencies that we have contractual relationships with that have created images in a legally acceptable manner.
- Images and videos that are directly referenced in a piece that is specifically about AI and such cases will be reviewed on a case-by-case basis.
- The use of generative AI tools developed with specific sets of underlying scientific data that can be attributed, checked and verified for accuracy, provided that ethics, copyright and terms of use restrictions are adhered to.
* All exceptions must be labelled clearly as generated by AI within the image field.
As we expect things to develop rapidly in this field in the near future, we will review this policy regularly and adapt if necessary.
Note: Examples of image types covered by this policy include: video and animation, including video stills; photography; illustration such as scientific diagrams, photo-illustrations and other collages, and editorial illustrations such as drawings, cartoons or other 2D or 3D visual representations. Not included in this policy are text-based and numerical display items, such as: tables, flow charts and other simple graphs that do not contain images. Please note that not all AI tools are generative. The use of non-generative machine learning tools to manipulate, combine or enhance existing images or figures should be disclosed in the relevant caption upon submission to allow a case-by-case review.
AI-generated Images Guidance
For more information on the inclusion of third party content (i.e. any work that you have not created yourself and which you have reproduced or adapted from other sources) please see Rights, Permissions, Third Party Distribution.
Paper Submission
Papers should be submitted through the EAI Confy+ system and must comply with the Springer format (see the Author’s Kit section).
- Regular papers should be 12-20 pages in length.
- Short papers should be 6-11 pages in length.
All conference papers undergo a thorough peer-review process prior to the final decision and publication, carried out by experts on the Technical Program Committee during a dedicated review period. Standard peer review is enhanced by EAI Community Review, which allows EAI members to bid to review specific papers. Review assignments are ultimately decided by the responsible Technical Program Committee members, while the Technical Program Committee Chair is responsible for the final acceptance selection. You can learn more about Community Review here.
Author’s kit – Instructions and Templates
Your submission should include:
- a PDF copy of your paper formatted according to the above templates, and
- an archive in .ZIP format containing the LaTeX or Word source material prepared according to the above guidelines.
Workshop Structure
The workshop is organized into two sessions of two hours each.
