Clear communication between patients and providers remains a barrier to healthcare access, whether the obstacle is timing, language, complicated medical jargon, or understanding patient intent. Telemedicine platform provider VSee and CloudMinds, maker of advanced Cloud AI engines, are partnering to overcome these challenges with a next-generation healthcare solution that combines the benefits of telemedicine and Cloud AI for improved care delivery and outcomes.
Showcasing Telemedicine + Cloud AI Solution at ATA18
You can get a demo of the VSee-CloudMinds solution at VSee booth #2510 at the American Telemedicine Association (ATA) Annual Conference, April 29 – May 1 in Chicago. Free ATA18 Expo-only passes are available here (click “Attendee–>Non-member” and fill out the registration form).
In addition, CloudMinds Director of AI & Robotics Applications, Charles R. Jankowski, PhD, will be speaking alongside Optum, Sentara, Dell, and Logitech, sharing insights on making telehealth work. See here for the full VSee booth Speakers + Pitch competition schedule.
VSee’s robust telehealth platform offers a full range of tools for simple telemedicine consults and chronic care management:
- Patient portals: customizable intake forms, self-scheduled walk-in and/or appointment visits, and consumer device integration (e.g. blood pressure cuffs, health trackers, glucose meters)
- Provider dashboards: patient queue notifications, health data visualizations, visit notes
- Practice management: provider profile & availability updates, call analytics, patient scheduling
- Reliable HIPAA-compliant video chat with live whiteboard, secure messaging & file sharing
Cloud AI Capabilities
CloudMinds brings to the solution best-in-class recognition, understanding, and conversation performance through a combination of high-performance third-party APIs and in-house functionality:
- Speech-to-Text (STT): supports both streaming (via WebSockets) and non-streaming (via REST) recognition in multiple languages
- Speaker ID/verification: identifies a person from his or her speech
- Language ID: automatically detects which language is being spoken
- Text-to-speech (TTS): supports various languages and voices
- Intent classification: identifies what a patient wants when a request is spoken
- Slot filling: extracts critical information from a natural language request
- Sentiment analysis: analyzes user sentiment based on the text of a message or messages
- Dialog management: supports natural-sounding dialog via both state-based and deep learning-based approaches
- Knowledge management: supports quality assurance by assembling knowledge from unstructured data
- Chitchat: supports realistic small-talk conversation between patient and provider
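To make the intent classification and slot filling ideas concrete, here is a minimal illustrative sketch of how a patient's natural language request might be mapped to an intent plus extracted slots. This is not CloudMinds' actual API (whose interfaces are not described here); real systems use trained classifiers and sequence taggers rather than the toy regular expressions shown below.

```python
import re

# Toy intent patterns -- purely illustrative, not CloudMinds' models.
INTENT_PATTERNS = {
    "schedule_visit": re.compile(r"\b(book|schedule|see)\b.*\b(doctor|visit|appointment)\b"),
    "refill_prescription": re.compile(r"\b(refill|renew)\b.*\b(prescription|medication)\b"),
}

# Toy slot patterns: each captures one piece of critical information.
SLOT_PATTERNS = {
    "day": re.compile(r"\b(today|tomorrow|monday|tuesday|wednesday|thursday|friday)\b"),
    "medication": re.compile(r"\bmy (\w+) (?:prescription|medication)\b"),
}

def understand(utterance: str) -> dict:
    """Classify the intent of a patient request and fill any slots found."""
    text = utterance.lower()
    intent = next(
        (name for name, pat in INTENT_PATTERNS.items() if pat.search(text)),
        "unknown",
    )
    slots = {}
    for slot, pat in SLOT_PATTERNS.items():
        match = pat.search(text)
        if match:
            slots[slot] = match.group(1)
    return {"intent": intent, "slots": slots}

print(understand("I need to schedule a doctor visit for tomorrow"))
# -> {'intent': 'schedule_visit', 'slots': {'day': 'tomorrow'}}
```

In a production pipeline, the STT engine would first transcribe the spoken request, and the resulting intent and slots would then drive dialog management (e.g. confirming the appointment day before booking).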
See the full press release here.