
WiMi Developed Multi-Modal EEG-Based Hybrid BCI System With Visual Servo Module


This article was originally published by AiThority

WiMi Hologram Cloud Inc., a leading global Hologram Augmented Reality ("AR") technology provider, announced that it has developed a multi-modal EEG-based hybrid BCI system with a visual servo module. The system combines SSVEP and motor imagery signals and introduces a visual servo module to improve the robot's performance in grasping tasks. By combining different types of EEG signals, users can control the robot more freely and intuitively to perform diverse actions, providing a more satisfying service experience.

The approach mainly involves the design of signal acquisition, signal processing, control command generation, and the visual servo module.

1. Signal Acquisition: The system first acquires the user's EEG signals and visual feedback signals. To realize multi-modal control, the system acquires both SSVEP and motor imagery signals.

SSVEP Signal Acquisition: By placing EEG electrodes on the user's scalp, the system acquires the user's SSVEP signals. SSVEP is a visual evoked potential elicited by flickering stimuli: when the user's visual attention is focused on a stimulus flickering at a specific frequency, the brain generates electrical signals at that frequency. To realize multi-modal control, the visual interface provides flickering stimuli at three different frequencies, each corresponding to one robot control command, such as forward, turn left, and turn right.

Motor Imagery Signal Acquisition: In addition to the SSVEP signals, the system acquires the user's motor imagery signals through EEG electrodes placed over a specific scalp area. When the user imagines a grasping movement, the associated motor imagery signals are captured and used to control the robot to perform the grasping action.

2. Signal Processing: After acquisition, the raw EEG signals are processed and analyzed to extract useful information; feature extraction and classification then recognize the user's intention.


SSVEP Signal Processing: For SSVEP signals, the system first filters and pre-processes the raw signal to eliminate noise and interference. Then, by extracting spectral features, it recognizes which stimulus frequency the user's visual attention is focused on, determining whether the user intends to move forward, turn left, or turn right.
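As a rough illustration of this spectral step, the sketch below picks the candidate stimulus frequency with the highest power in the signal's spectrum. The sampling rate, frequencies, and single-channel setup are illustrative assumptions, not WiMi's actual parameters; production SSVEP decoders more often use multi-channel methods such as canonical correlation analysis.

```python
import numpy as np

def detect_ssvep_frequency(signal, fs, candidate_freqs):
    """Return the candidate stimulus frequency with the highest spectral power."""
    spectrum = np.abs(np.fft.rfft(signal)) ** 2          # power spectrum
    freqs = np.fft.rfftfreq(len(signal), d=1.0 / fs)     # frequency of each bin
    powers = [spectrum[np.argmin(np.abs(freqs - f))]     # power at nearest bin
              for f in candidate_freqs]
    return candidate_freqs[int(np.argmax(powers))]

# Simulated 2-second recording at 250 Hz: a 10 Hz SSVEP response plus noise.
fs = 250
t = np.arange(0, 2, 1.0 / fs)
rng = np.random.default_rng(0)
eeg = np.sin(2 * np.pi * 10 * t) + 0.5 * rng.standard_normal(len(t))

print(detect_ssvep_frequency(eeg, fs, [8.0, 10.0, 12.0]))  # → 10.0
```

In a real pipeline this would run on band-pass-filtered, artifact-rejected epochs rather than raw samples.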

Motor Imagery Signal Processing: For motor imagery signals, the system pre-processes the raw signals to eliminate noise and interference. Then the user's imagined actions, such as a grasping motion, are recognized through feature extraction and classification.
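One common feature for this kind of classification is mu-band (8–12 Hz) power, which drops during imagined movement (event-related desynchronization). The minimal sketch below thresholds that drop against a resting baseline; the window length, band, and threshold are illustrative assumptions, and a real system would use trained classifiers over richer features (e.g., common spatial patterns).

```python
import numpy as np

def mu_band_power(window, fs, band=(8.0, 12.0)):
    """Mean spectral power in the mu band (8-12 Hz) for one EEG window."""
    spectrum = np.abs(np.fft.rfft(window)) ** 2
    freqs = np.fft.rfftfreq(len(window), d=1.0 / fs)
    mask = (freqs >= band[0]) & (freqs <= band[1])
    return spectrum[mask].mean()

def classify_window(window, fs, rest_power, threshold=0.6):
    """Label a window 'grasp' when mu power drops well below the resting
    baseline (event-related desynchronization), otherwise 'rest'."""
    return "grasp" if mu_band_power(window, fs) < threshold * rest_power else "rest"

# Simulated 1-second windows at 250 Hz: imagery attenuates the 10 Hz rhythm.
fs = 250
t = np.arange(0, 1, 1.0 / fs)
rng = np.random.default_rng(1)
rest_win = np.sin(2 * np.pi * 10 * t) + 0.2 * rng.standard_normal(len(t))
imagery_win = 0.2 * np.sin(2 * np.pi * 10 * t) + 0.2 * rng.standard_normal(len(t))
baseline = mu_band_power(rest_win, fs)

print(classify_window(imagery_win, fs, baseline))  # → grasp
```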

In WiMi's system, control command generation is the core of the whole pipeline: it parses the recognized EEG signals and maps them to the corresponding robot actions.

3. Control Command Generation: After recognizing the user's intention, the system generates the corresponding control commands to direct the robot's actions.

SSVEP Control Command Generation: For SSVEP signals, the system uses spectral analysis to identify which stimulus frequency the user's visual attention is focused on. The different flickering stimuli on the visual interface correspond to different robot movements, such as forward, turn left, and turn right. By recognizing the frequency the user is attending to, the system determines the user's intention and generates the corresponding control command.
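The frequency-to-command mapping could be as simple as a lookup with a tolerance for imperfect detection. The specific frequencies and command names here are hypothetical; WiMi does not disclose its actual interface parameters.

```python
# Hypothetical mapping from stimulus frequency (Hz) to a robot command.
COMMANDS = {8.0: "forward", 10.0: "turn_left", 12.0: "turn_right"}

def ssvep_to_command(detected_freq, tolerance=0.5):
    """Map a detected SSVEP frequency to its robot command, or None when
    the detection is not close enough to any stimulus frequency."""
    for freq, command in COMMANDS.items():
        if abs(detected_freq - freq) <= tolerance:
            return command
    return None

print(ssvep_to_command(10.1))  # → turn_left
print(ssvep_to_command(15.0))  # → None
```

Returning None for out-of-band detections gives the controller a safe "do nothing" default rather than forcing a spurious movement.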

Motor Imagery Control Command Generation: For motor imagery signals, the system uses feature extraction and classification to recognize the user's imagined movements. When the user imagines a grasping motion, the corresponding motor imagery signals are captured. The system recognizes these features with trained machine learning algorithms and generates control commands that instruct the robot to perform the grasping motion.


4. Visual Servo Module Design: The visual servo module improves the performance and accuracy of the robot in grasping tasks. It adjusts the robot's grasping pose and grip force in real time, making the grasping action more accurate and reliable. The module captures real-time visual feedback from the robot through a camera and combines it with the user's motor imagery signals for dynamic adjustment.

Visual Feedback Acquisition: The camera captures real-time visual feedback as the robot performs a grasping task. This feedback may include the position and attitude of the robot's end-effector (e.g., a mechanical gripper) and the position and shape of the target object.

Feature Extraction: Useful features are extracted from the visual feedback. These may include the edges, colors, and shape of the target object, as well as the position and attitude of the robot's end-effector.
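As a stand-in for real feature extraction, the sketch below computes the centroid of a binary segmentation mask of the target object — one of the simplest position features a servo loop can consume. The mask shape and object region are made up for illustration; a real system would segment camera frames with far more robust vision methods.

```python
import numpy as np

def object_centroid(mask):
    """Centroid (row, col) of a binary mask marking the target object."""
    rows, cols = np.nonzero(mask)        # pixel coordinates of the object
    return rows.mean(), cols.mean()

# Hypothetical 8x8 frame with a detected object region.
mask = np.zeros((8, 8), dtype=bool)
mask[2:4, 5:7] = True

print(object_centroid(mask))  # → (2.5, 5.5)
```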

Control Command Adjustment: Features extracted from the visual feedback are combined with the user's motor imagery signals for dynamic adjustment. For example, if the user imagines grasping a more distant object, the system adjusts the robot's grasping pose and grip force accordingly, enabling the robot to better accomplish the grasping task.
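The closed-loop adjustment can be sketched as a proportional visual-servo update: each cycle, the end-effector is moved a fraction of the way toward the target position observed in the camera image. The gain, 2-D positions, and units are illustrative assumptions, not details from WiMi's system.

```python
import numpy as np

def servo_step(effector_xy, target_xy, gain=0.4):
    """One proportional visual-servo update: move the end-effector a
    fraction of the remaining error toward the camera-detected target."""
    error = np.asarray(target_xy) - np.asarray(effector_xy)
    return np.asarray(effector_xy) + gain * error

# Iterating the update drives the positional error toward zero.
pos = np.array([0.0, 0.0])
target = np.array([0.30, 0.10])   # illustrative target position (metres)
for _ in range(10):
    pos = servo_step(pos, target)

print(np.linalg.norm(target - pos) < 0.005)  # → True
```

With a gain between 0 and 1 the error shrinks geometrically each cycle, which is why the loop converges without overshoot in this idealized, disturbance-free setting.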

Feedback Control: The visual servo module monitors the robot's progress in the grasping task in real time and applies feedback control based on the actual execution. If an error or instability occurs during grasping, the system makes timely adjustments so that the robot completes the action more accurately.

With the visual servo module, the system can adapt more flexibly to different grasping scenarios and user intentions, providing a higher-quality service experience. The module enhances the robot's autonomy and adaptability in grasping tasks, enabling more complex and natural multi-modal control.


Traditional brain-computer interface systems usually provide only a limited number of control commands, restricting how the user interacts with the robot. WiMi's system combines different types of EEG signals to enable richer and more varied control commands: users can imagine different actions or focus on stimuli of different frequencies to achieve more sophisticated robot control, providing a more flexible and natural interaction experience.

By combining multiple EEG signals, WiMi's system recognizes the user's intention more accurately. For example, combining SSVEP and motor imagery signals enables higher control accuracy, while the visual servo module allows real-time adjustment of the robot's movements, improving the reliability and accuracy of the control commands so that the robot responds better to the user. The system is not limited to robot control; it can also be applied to other fields such as virtual reality, rehabilitation therapy, and the control of assistive devices. This extension opens new possibilities for brain-computer interface applications and promotes development in human-computer interaction.

WiMi's system combines and applies multiple technologies, including EEG signal processing, feature extraction, machine learning algorithms, and visual servo technology. By addressing the coordination and integration of these technologies, it advances BCI technology and lays the foundation for higher-level brain-computer interface applications. The system provides richer and more diversified control commands, improves control precision and reliability, expands the application fields of brain-computer interfaces, and improves the user experience and quality of life, pushing forward the development and innovation of BCI technology, which is expected to play an important role in intelligent robotics and human-computer interaction.



