An independent initiative focused on ethical and interpretable artificial intelligence. Not a startup (yet). A lab, a vision, a working prototype.
Modern AI systems, particularly deep neural networks, operate as black boxes. Their decision-making processes remain hidden, making it difficult, and often impossible, to understand why they produce specific outputs.
As AI increasingly influences critical decisions in healthcare, finance, and law, the lack of transparency creates serious ethical and practical concerns. Who is responsible when an opaque system makes a harmful decision?
Upcoming legislation like the EU AI Act will require explainability for high-risk AI systems. Most current implementations are not prepared to meet these requirements, creating both compliance risks and technical challenges.
Interpretability isn't just a technical challenge—it's a design challenge. We need to build AI systems that can explain themselves in human terms, and interfaces that make those explanations accessible and actionable.
A world where AI systems are as transparent as they are powerful. Where decisions affecting human lives are auditable, understandable, and aligned with human values and intentions.
We build interpretability frameworks that integrate with existing models, implementing techniques like LIME, SHAP, and Integrated Gradients to provide insights into model decisions without sacrificing performance.
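As a rough illustration of what this looks like in practice, here is a minimal sketch of attaching a post-hoc explainer to an already-trained model with the open-source shap library. The scikit-learn model and dataset are stand-ins, not part of any specific client stack:

```python
# Minimal sketch: post-hoc SHAP explanations for an existing model.
# The model and dataset below are placeholders for an already-trained system.
import shap
from sklearn.datasets import load_diabetes
from sklearn.ensemble import GradientBoostingRegressor

data = load_diabetes()
model = GradientBoostingRegressor(random_state=0).fit(data.data, data.target)

# TreeExplainer attaches after training: no retraining, no model changes.
explainer = shap.TreeExplainer(model)
shap_values = explainer.shap_values(data.data[:100])  # shape: (100, n_features)

# Global view of which features drive predictions, and in which direction.
shap.summary_plot(shap_values, data.data[:100], feature_names=data.feature_names)
```

The pattern generalizes: the explainer wraps the model rather than replacing it, which is why explanations can be added without sacrificing predictive performance.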
Our modular UI components can be integrated into existing dashboards, providing visual explanations of model behavior, feature importance, and decision boundaries that both technical and non-technical stakeholders can understand.
We translate cutting-edge XAI research into practical implementation guides, helping teams adopt best practices for model transparency and documentation throughout the AI development lifecycle.
We help organizations prepare for upcoming AI regulations by implementing documentation frameworks, audit trails, and explainability features that align with emerging legal requirements for algorithmic transparency.
A novel approach to visualizing attention mechanisms in transformer-based language models, making their internal processes more interpretable for researchers and practitioners.
Figure 3: Attention-weight visualization showing word importance in a medical diagnosis prediction
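For practitioners who want a concrete starting point, the sketch below shows one common way to extract raw attention weights from a Hugging Face transformer for inspection. It uses a generic model (bert-base-uncased) and an invented example sentence, and illustrates the general idea rather than our published method:

```python
# Illustrative sketch: pulling attention weights out of a transformer.
# Model name and input sentence are examples, not part of the published work.
import torch
from transformers import AutoModel, AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("bert-base-uncased")
model = AutoModel.from_pretrained("bert-base-uncased", output_attentions=True)

inputs = tokenizer("Patient reports chest pain and shortness of breath.",
                   return_tensors="pt")
with torch.no_grad():
    outputs = model(**inputs)

# outputs.attentions is a tuple with one tensor per layer,
# each shaped [batch, heads, seq_len, seq_len].
last_layer = outputs.attentions[-1][0]         # [heads, seq_len, seq_len]
avg_attention = last_layer.mean(dim=0)         # average over attention heads

tokens = tokenizer.convert_ids_to_tokens(inputs["input_ids"][0].tolist())
received = avg_attention.mean(dim=0).tolist()  # attention each token receives

for token, weight in zip(tokens, received):
    print(f"{token:>15s}  {weight:.3f}")
```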
Our approach combines cutting-edge research in model interpretability with intuitive visualizations, creating a bridge between complex AI systems and human understanding.
Multi-layered visualization techniques reveal how models progressively build understanding, from low-level pattern recognition to high-level concept formation.
Interactive explanations significantly improve user trust and users' ability to predict model behavior, even among non-technical stakeholders.
Every AI decision is presented as a traceable sequence of events, allowing users to follow the complete path from input to output.
Explanations are available at multiple levels of detail, from high-level summaries to deep technical explorations.
Each explanation includes specific, actionable information that helps users understand and respond to the AI's decision.
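To make the three properties above concrete, here is a hypothetical sketch of how a single decision trace could be represented as a data structure. The class and field names are illustrative, not our actual schema:

```python
# Hypothetical decision-trace structure; names are illustrative only.
from dataclasses import dataclass, field

@dataclass
class TraceStep:
    stage: str                                   # e.g. "input encoding"
    summary: str                                 # plain-language description
    detail: dict = field(default_factory=dict)   # deeper technical payload

@dataclass
class DecisionTrace:
    input_text: str
    prediction: str
    confidence: float
    steps: list[TraceStep] = field(default_factory=list)

    def summarize(self) -> str:
        """One line per step: the high-level view for non-technical readers."""
        lines = [f"Prediction: {self.prediction} ({self.confidence:.0%} confidence)"]
        lines += [f"  {i + 1}. {step.summary}" for i, step in enumerate(self.steps)]
        return "\n".join(lines)
```

Because each step carries its own detail payload, the same record can back both the high-level summary and the deep technical exploration.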
Engineers and data scientists building models for high-stakes domains who need to understand, debug, and improve their models' behavior.
Professionals responsible for ensuring AI systems meet emerging legal requirements for transparency, fairness, and accountability.
Decision-makers in healthcare, finance, HR, and legal tech who need to understand and communicate how AI influences their products.
"If your model affects real humans, it should be readable by real humans."
LucidAI is early-stage but growing. The core research is complete, and we're now building out the practical tools and interfaces that make this research accessible to organizations that need it.
Initial paper on neural attention visualization published and downloaded 120+ times.
Working prototype of the explainability dashboard with timeline view and attention maps.
Collaborating with two AI research groups to test and refine the approach.
We're seeking collaborators, mentors, early adopters, and mission-aligned partners to help bring LucidAI to the organizations and teams that need it most.
Complete beta version of the explainability toolkit with documentation.
Launch pilot programs with partner organizations in healthcare and finance.
Release open-source version of core explainability libraries and UI components.
LucidAI is more than a project—it's a commitment to ensuring that as AI becomes more powerful, it also becomes more transparent, understandable, and aligned with human values.
I started LucidAI because I believe that as AI systems become more powerful and more integrated into our lives, we need to ensure they remain understandable and aligned with human values.
At 16, I'm under no illusions about the complexity of this challenge. But I'm convinced that the work of making AI interpretable is too important to leave solely to large organizations.
If you share this vision—whether you're a researcher, engineer, designer, or organization looking to make your AI systems more transparent—I'd love to connect and explore how we might work together.
Adam Guérin
Founder, LucidAI