Publications
Same Content, Different Answers: Cross-Modal Inconsistency in MLLMs
A recent CVPR paper on cross-modal inconsistency in multimodal large language models (MLLMs), with implications for interpretability and reliability.
Leveraging Differentiable Climate-Economy Models for Hybrid Modeling and Inverse Problems
Differentiable climate-economy models applied to hybrid modeling and inverse problems in policy-relevant strategic environments.
ECSEL: Explainable Classification via Signomial Equation Learning
Equation-based explainability for classification in settings where transparency matters.
Explaining the Explainer: Understanding the Inner Workings of Transformer-Based Symbolic Regression Models
Mechanistic interpretability for transformer models that produce symbolic structure.
Detecting Fraud in Financial Networks: A Semi-Supervised GNN Approach with Granger-Causal Explanations
A finance-grounded case for graph learning with causal explanations and operational accountability.
Analyzing Probabilistic Logic Shields for Multi-Agent Reinforcement Learning
Safety-oriented neurosymbolic control for multiagent reinforcement learning.
ACTIVA: Amortized Causal Effect Estimation via Transformer-Based Variational Autoencoder
Representation learning for causal effect estimation in data-rich decision settings.
AI for Global Climate Cooperation: Modeling Global Climate Negotiations, Agreements, and Long-Term Cooperation in RICE-N
Multiagent climate cooperation and policy design in a high-stakes negotiation setting, relevant for safe and accountable strategic decision-making.
CORE: Towards Scalable and Efficient Causal Discovery with Reinforcement Learning
Causal discovery through reinforcement learning, bridging reasoning and adaptation.
CAGE: Causality-Aware Shapley Value for Global Explanations
Global explanations informed by causal structure, aimed at more faithful explanations in high-stakes settings.
Enforcing Interpretability in Time Series Transformers: A Concept Bottleneck Framework
Concept bottlenecks for making time-series transformer models more interpretable and inspectable.
Explainable Fraud Detection with Deep Symbolic Classification
Fraud detection through deep symbolic classification, emphasizing transparent model structure and explanations.
On the Potential of Network-Based Features for Fraud Detection
A study of network-derived signals for fraud detection in realistic financial settings.
Differentiable Inductive Logic Programming for Fraud Detection
Logic-based fraud detection with differentiable inductive logic programming.
Integrating Fuzzy Logic into Deep Symbolic Regression
Fuzzy-logic extensions for symbolic regression, aimed at more interpretable modeling in financially relevant settings.