Abstract
We address the problem of learning the legitimacy of other agents in a multiagent network when an unknown subset of them consists of malicious actors. We specifically derive results for directed graphs in which stochastic side information, or observations of trust, is available. We refer to this as “learning trust,” since agents must identify which neighbors in the network are reliable, and we derive a learning protocol to achieve this. We also provide analytical results showing that, under this protocol, i) agents can learn the legitimacy of all other agents almost surely, and ii) the opinions of the agents converge in mean to the true legitimacy of all other agents in the network. Lastly, we provide numerical studies showing that our convergence results hold across various network topologies and numbers of malicious agents.
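The abstract's almost-sure learning guarantee can be illustrated with a minimal sketch. Assume, as a simplification not taken from the paper itself, that each trust observation is a Bernoulli variable whose mean exceeds 1/2 for legitimate neighbors and falls below 1/2 for malicious ones; by the strong law of large numbers, thresholding the running empirical mean at 1/2 then classifies every neighbor correctly as observations accumulate. The function name, probabilities, and observation counts below are all hypothetical choices for illustration.

```python
import random

def estimate_legitimacy(true_legit, p_legit=0.7, p_malicious=0.3,
                        n_obs=2000, seed=0):
    """Classify each neighbor from stochastic trust observations.

    true_legit: dict mapping neighbor id -> True (legitimate) / False (malicious).
    Each observation is Bernoulli(p_legit) for legitimate neighbors and
    Bernoulli(p_malicious) for malicious ones (an illustrative assumption).
    A neighbor is declared legitimate if its empirical mean exceeds 1/2.
    """
    rng = random.Random(seed)
    estimates = {}
    for agent, is_legit in true_legit.items():
        p = p_legit if is_legit else p_malicious
        # Accumulate n_obs Bernoulli trust observations for this neighbor.
        mean = sum(1 for _ in range(n_obs) if rng.random() < p) / n_obs
        estimates[agent] = mean > 0.5
    return estimates

if __name__ == "__main__":
    print(estimate_legitimacy({"a": True, "b": True, "c": False}))
```

With enough observations the empirical means concentrate around 0.7 and 0.3, so the threshold test recovers the true legitimacy labels; the paper's protocol addresses the harder networked, directed-graph version of this idea.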
| Original language | English (US) |
|---|---|
| Pages (from-to) | 142-154 |
| Number of pages | 13 |
| Journal | Proceedings of Machine Learning Research |
| Volume | 211 |
| State | Published - 2023 |
| Externally published | Yes |
| Event | 5th Annual Conference on Learning for Dynamics and Control, L4DC 2023 - Philadelphia, United States |
| Duration | Jun 15 2023 → Jun 16 2023 |
Keywords
- Multiagent systems
- adversarial learning
- directed graphs
- networked systems
ASJC Scopus subject areas
- Artificial Intelligence
- Software
- Control and Systems Engineering
- Statistics and Probability