| | WearableIntelligenceSystem | interpret |
|---|---|---|
| Mentions | 8 | 6 |
| Stars | 103 | 6,007 |
| Growth | 2.9% | 0.6% |
| Activity | 3.1 | 9.7 |
| Last commit | 9 months ago | 8 days ago |
| Language | C++ | C++ |
| License | MIT License | MIT License |
Stars - the number of stars that a project has on GitHub. Growth - month-over-month growth in stars.
Activity is a relative number indicating how actively a project is being developed. Recent commits have higher weight than older ones.
For example, an activity of 9.0 indicates that a project is amongst the top 10% of the most actively developed projects that we are tracking.
WearableIntelligenceSystem
-
Are you excited for premium mixed reality yet? How do you think QC is going to enable <10ms passthrough latency? Quest 2 latency is ~50ms.
This architecture is pretty much what we've built out as a smart glasses software framework here: https://github.com/emexlabs/WearableIntelligenceSystem
-
Smart Glasses Apps - Visual Search, Language Translation, Memory Tools - WIS Beta Release
Smart glasses are a novel interface in a few key ways. Most obviously, they bring the screen directly in front of the user, who can therefore access a visual information stream at all times. Less commonly considered, but perhaps more important, is the access to always-on point-of-view (POV) sensory data. This allows the machine to "see what you see" and "hear what you hear". The video above is a quick demonstration of a few basic features that smart glasses can provide.

We'd love to know where you see this technology going. Of course, spatial computing and AR are cool and hyped right now, but what use cases do you think you'll use in your everyday life? What could enhance your intelligence and capabilities by taking advantage of this new form factor? We're trying to answer these questions, and building a system that makes it easy for developers to build their own smart glasses applications: https://github.com/emexlabs/WearableIntelligenceSystem
-
Google teases augmented reality glasses with real-time translation
Also, the open-source smart glasses software stack Wearable Intelligence System has this transcription/translation built in: https://github.com/emexlabs/WearableIntelligenceSystem
- Smart Glasses Memory Tools - Wearable Intelligence System
-
AR translation custom programming?
Here's the open source system: https://github.com/emexlabs/WearableIntelligenceSystem
-
Glasses for ADHD
Soon there will be! I am working with a team right now to develop an app for smart glasses that I am designing in a way that will help me with my ADHD. Here's the project.
https://github.com/emexlabs/WearableIntelligenceSystem: a software framework to serve as the backend for a number of wearable computing use cases, plus tools to upgrade human intelligence: conversational intelligence, social intelligence, memory, knowledge, and thinking tools running on AR glasses.
-
Wearable Intelligence System - Smart Glasses Research Demo
We're trying to answer these questions, and building a system that makes it easy for developers to build their own smart glasses applications: https://github.com/emexlabs/WearableIntelligenceSystem
interpret
-
[D] Alternatives to the shap explainability package
Maybe InterpretML? It's developed and maintained by Microsoft Research and consolidates a lot of different explainability methods.
-
What Are the Most Important Statistical Ideas of the Past 50 Years?
You may also find Explainable Boosting Machines interesting: https://github.com/interpretml/interpret
They're something of a best of both worlds between linear models and random forests: generalized additive models fit with boosted decision trees.
Disclosure: I helped build this open source package
-
[N] Google confirms DeepMind Health Streams project has been killed off
Microsoft's Explainable Boosting Machine (which is a generalized additive model, not a gradient-boosted-trees model) is a step in that direction: https://github.com/interpretml/interpret
-
[Discussion] XGBoost is the way.
Also, I'd recommend everyone who works with XGBoost to give EBMs a try! They perform comparably (except in the case of extreme interactions) but are actually interpretable! https://github.com/interpretml/interpret/ Besides that, since at runtime they're practically a lookup table, they're very quick (at the cost of longer training time).
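The comment above notes that at prediction time an EBM is "practically a lookup table": each feature contributes an independently learned per-bin score, and the logit is just the sum of those scores plus an intercept. A toy sketch of that additive structure in plain Python (illustrative only; the real interpret package learns these tables with cyclic gradient boosting, and the names and values here are invented):

```python
import math

def predict_proba(x, bin_edges, bin_scores, intercept):
    """Score one example with an EBM-style additive model.

    x          -- list of raw feature values
    bin_edges  -- per-feature lists of bin boundaries
    bin_scores -- per-feature lists of learned bin scores
    Returns the probability of the positive class.
    """
    logit = intercept
    for value, edges, scores in zip(x, bin_edges, bin_scores):
        # Find which bin this feature value falls into (linear scan
        # for clarity; a real implementation would use binary search).
        idx = sum(1 for e in edges if value >= e)
        logit += scores[idx]  # additive per-feature contribution
    return 1.0 / (1.0 + math.exp(-logit))

# Two hypothetical features, each with a tiny learned score table.
bin_edges = [[0.5], [10.0, 20.0]]             # feature 1: 2 bins; feature 2: 3 bins
bin_scores = [[-1.0, 1.0], [-0.5, 0.0, 0.5]]
p = predict_proba([0.7, 25.0], bin_edges, bin_scores, intercept=0.0)
# logit = 1.0 + 0.5 = 1.5, so p is about 0.82
```

Because scoring is a handful of table lookups and additions, prediction is very fast, and each feature's contribution can be read directly off its score table, which is where the interpretability comes from.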
-
[D] Generalized Additive Modelsโฆ with trees?
Open source code by Microsoft: https://github.com/interpretml/interpret (called EBM in this implementation).
-
Machine Learning with Medical Data (unbalanced dataset)
If it's not an image, have a go at Microsoft's Explainable Boosting Machine: https://github.com/interpretml/interpret which is not a GBM but a GAM (Gradient Boosting Machine vs. Generalized Additive Model). This will also give you explanations via SHAP or LIME values.
What are some alternatives?
spot_mini_mini - Dynamics and Domain Randomized Gait Modulation with Bezier Curves for Sim-to-Real Legged Locomotion.
shap - A game theoretic approach to explain the output of any machine learning model.
cyberWatch - simple OS for LillyGO T-Watch V3
shapash - Shapash: User-friendly Explainability and Interpretability to Develop Reliable and Transparent Machine Learning Models
ultimateMRZ-SDK - Machine-readable zone/travel document (MRZ / MRTD) detector and recognizer using deep learning
alibi - Algorithms for explaining machine learning models
Convoscope - AI tools to augment conversations on smart glasses, wearables, laptops, and smart meeting rooms.
imodels - Interpretable ML package for concise, transparent, and accurate predictive modeling (sklearn-compatible).
liboai - A C++17 library to access the entire OpenAI API.
medspacy - Library for clinical NLP with spaCy.
webots - Webots Robot Simulator
decision-tree-classifier - Decision Tree Classifier and Boosted Random Forest