TY - CHAP
T1 - LNN
T2 - Logical Neural Networks
AU - Shakarian, Paulo
AU - Baral, Chitta
AU - Simari, Gerardo I.
AU - Xi, Bowen
AU - Pokala, Lahari
N1 - Publisher Copyright: © 2023, The Author(s), under exclusive license to Springer Nature Switzerland AG.
PY - 2023
Y1 - 2023
N2 - The Logical Neural Network (LNN) is a framework that assumes a logic program is known a priori and uses gradient descent to fit that program to training data via parameterized logical operators, resulting in fuzzy logic semantics. The framework has several desirable properties, namely support for open-world reasoning, omnidirectional inference, and explainability. While consistency cannot be guaranteed, LNNs use an additional term in the loss function to minimize inconsistencies. In this chapter, we review the foundations of LNNs and discuss the architectural decisions that make them comparable to, and distinct from, other neuro-symbolic approaches.
AB - The Logical Neural Network (LNN) is a framework that assumes a logic program is known a priori and uses gradient descent to fit that program to training data via parameterized logical operators, resulting in fuzzy logic semantics. The framework has several desirable properties, namely support for open-world reasoning, omnidirectional inference, and explainability. While consistency cannot be guaranteed, LNNs use an additional term in the loss function to minimize inconsistencies. In this chapter, we review the foundations of LNNs and discuss the architectural decisions that make them comparable to, and distinct from, other neuro-symbolic approaches.
UR - http://www.scopus.com/inward/record.url?scp=85172417314&partnerID=8YFLogxK
UR - http://www.scopus.com/inward/citedby.url?scp=85172417314&partnerID=8YFLogxK
U2 - 10.1007/978-3-031-39179-8_6
DO - 10.1007/978-3-031-39179-8_6
M3 - Chapter
T3 - SpringerBriefs in Computer Science
SP - 53
EP - 61
BT - SpringerBriefs in Computer Science
PB - Springer
ER -