Video Deflection Technology
Video Deflection Technology promises to be the future of many studies in vibration analysis. As electronics evolve, new predictive maintenance technologies arise as well, and such is the case with non-invasive vibration analysis. We never imagined we could measure vibration with a camera, and even better, with the cellphone in our own pocket. Such is the case with Video Deflection. Stay with me to learn a little more about this amazing technology.
What is the Looking Glass Technique?
Video Deflection Technology is a method of vibration analysis that combines modern slow-motion camera technology, found in everything from consumer-grade cellular phones to expensive professional-grade cameras, with analytic software to identify and amplify micro-movements within a video that are not recognizable to the human eye.
Video Deflection Replaces Accelerometers with Target Locations
Utilizing a combination of algorithms, Video Deflection Technology software locates target areas of interest. Based upon identified angularity and color differentiation within a target video frame, this technology compares the movement of those targets from frame to frame. In fact, this method can create thousands of vibration analysis measurement points without ever having to use a traditional accelerometer.
Step 1: Identify the Targets
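As a rough illustration of the frame-to-frame comparison idea (a minimal sketch, not DragonVision™'s actual algorithm), the code below estimates the pixel shift of a small target patch between two grayscale frames using simple block matching; the synthetic frames, patch size, and search radius are all made up for the example:

```python
import numpy as np

def track_target(frame_a, frame_b, target, patch=8, search=4):
    """Estimate the (dy, dx) shift of a small target patch between two
    grayscale frames using sum-of-squared-differences block matching.
    `target` is the (row, col) centre of the patch in frame_a."""
    r, c = target
    ref = frame_a[r - patch:r + patch, c - patch:c + patch].astype(float)
    best, best_shift = np.inf, (0, 0)
    for dy in range(-search, search + 1):
        for dx in range(-search, search + 1):
            cand = frame_b[r + dy - patch:r + dy + patch,
                           c + dx - patch:c + dx + patch].astype(float)
            err = np.sum((cand - ref) ** 2)
            if err < best:
                best, best_shift = err, (dy, dx)
    return best_shift

# Synthetic example: a bright square that moves down one pixel between frames.
a = np.zeros((64, 64)); a[30:34, 30:34] = 1.0
b = np.roll(a, (1, 0), axis=(0, 1))
print(track_target(a, b, (32, 32)))   # -> (1, 0)
```

A real implementation would track thousands of such targets, chosen for their angularity and color contrast, across every frame of the video.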
Comparing Targets with Static Zones
Once target locations of measurement are identified, static zones need to be identified as well, so that the movement of the targets can be compared to the movement of those static areas.
Step 2: Compare Targets to Static Points
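The comparison itself can be sketched in a few lines: subtracting the motion measured in a static reference zone from each target's motion isolates deflection of the structure from whole-frame movement such as camera shake. The numbers below are purely illustrative:

```python
import numpy as np

# Per-frame horizontal displacement, in pixels (illustrative values only).
target_motion = np.array([0.8, 1.2, 0.5, 1.4, 0.9])   # a target on the machine
static_motion = np.array([0.3, 0.3, 0.2, 0.4, 0.3])   # an identified static zone
deflection = target_motion - static_motion            # movement of the machine itself
print(deflection)
```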
Repeatability and Reliability in Video Deflection Technology
Repeatable, Reliable, and NIST-Traceable Calibration
Video Deflection Technology offers three different calibration methods to enable the user to establish a reliable deflection model and extremely accurate point-based vibration analysis data.
- Native Format Calibration – Utilizes a synchronous vibration analysis signal from a traditional accelerometer that is taken at the same time as the video recording from a specified location. This method synchronizes the vibration data to the video and thus provides the most accurate representation of the data presented.
- RMS Value Calibration – Utilizes a single reference x/y displacement value at a specified location; this method delivers the second most accurate representation of the data presented.
- Distance Calibration – Identifies the distance between two locations within the video frame and calculates displacement/mass transfer based upon the identified distance, scaled throughout the model.
Step 3: Choose the Best Calibration Method Available
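The arithmetic behind distance calibration is simple to sketch: a known real-world distance between two points in the frame yields a pixels-to-engineering-units scale that is applied across the whole model. All coordinates and distances below are made-up example values:

```python
# Distance calibration sketch (illustrative numbers, not from any real system).
p1, p2 = (120.0, 200.0), (120.0, 840.0)        # pixel coordinates of two points
known_distance_mm = 320.0                      # distance measured on the machine
pixels = ((p2[0] - p1[0])**2 + (p2[1] - p1[1])**2) ** 0.5
scale = known_distance_mm / pixels             # mm per pixel
peak_shift_px = 1.6                            # a tracked target's peak displacement
print(peak_shift_px * scale)                   # -> 0.8 (mm)
```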
Identifying Dominant Forcing Frequency
The Looking Glass Technique identifies the dominant forcing frequency, thus enabling the completion of a phase simulation of the applicable targets.
Step 4: Identify Dominant Forcing Frequencies
Step 5: Check Phase if Appropriate
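Finding a dominant forcing frequency generally amounts to locating the largest peak in the FFT of a target's displacement waveform. The sketch below assumes an illustrative 240 fps recording and a synthetic signal; it is not the product's implementation:

```python
import numpy as np

fps = 240                                  # assumed video frame rate (Hz)
t = np.arange(fps * 2) / fps               # 2 seconds of samples
# Synthetic displacement: a strong 29.5 Hz component plus a weaker 7 Hz one.
signal = 1.0 * np.sin(2 * np.pi * 29.5 * t) + 0.2 * np.sin(2 * np.pi * 7.0 * t)
spectrum = np.abs(np.fft.rfft(signal))
freqs = np.fft.rfftfreq(len(signal), d=1 / fps)
dominant = freqs[np.argmax(spectrum)]      # frequency of the tallest peak
print(dominant)                            # -> 29.5
```

With the dominant frequency in hand, the relative phase of each target at that frequency can then be compared across the machine.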
Motion Detection Analytical Methods
Identifying the areas of most interest can be difficult using the phase simulation method alone. Therefore, the motion detection feature identifies and colorizes the areas of greatest displacement found within the post-processed video.
Step 6: Motion Detection Tool
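The core idea behind such a heat map can be sketched as a per-pixel measure of temporal variation across frames, which a real tool would then colorize. The tiny synthetic video below is purely illustrative:

```python
import numpy as np

# Ten 32x32 synthetic frames with a small patch whose brightness oscillates.
frames = np.zeros((10, 32, 32))
for i in range(10):
    frames[i, 10:13, 10:13] = np.sin(2 * np.pi * i / 10)

energy = frames.std(axis=0)                 # temporal variation at each pixel
hottest = np.unravel_index(np.argmax(energy), energy.shape)
print(hottest)                              # a pixel inside the moving patch
```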
Video Deflection – Creating a Motion Animated Amplified Model
Creating a Video Deflection Model that amplifies the motion of the applicable targets within a video is a highly sought-after result in the motion augmentation field. To do so, one can amplify the motion of the entire range of targets, focus on a specific range of targets based on filters, or work with any number of independent ranges using filters.
Step 7: Add Filters
Step 8: Zoom To Areas of Interest or Create Video Deflection Model
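One common way to combine filtering and amplification, sketched here under assumed values (a 240 fps recording and a 25–35 Hz band of interest; neither comes from the source), is to band-pass a target's displacement signal in the frequency domain, scale the filtered component, and add it back:

```python
import numpy as np

fps, amplify = 240, 20.0
t = np.arange(fps) / fps
# Synthetic displacement: a tiny 30 Hz vibration riding on a slow 2 Hz drift.
disp = 0.05 * np.sin(2 * np.pi * 30 * t) + 0.5 * np.sin(2 * np.pi * 2 * t)

spec = np.fft.rfft(disp)
freqs = np.fft.rfftfreq(len(disp), d=1 / fps)
band = (freqs >= 25) & (freqs <= 35)               # keep only the band of interest
filtered = np.fft.irfft(np.where(band, spec, 0), n=len(disp))
amplified = disp + (amplify - 1) * filtered        # exaggerate only the 30 Hz motion
print(amplified.max())                             # noticeably larger than disp.max()
```

Applying the same band-pass-and-scale operation to every target's trajectory, rather than a single signal, is what produces the exaggerated deflection model in the video.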
Vibration Zone Detection
Virtually immediately, DragonVision™ is able to identify the areas with the greatest vibration through a micro-movement detection algorithm, even isolating and filtering out the movement of your own hand while holding the camera.
DragonVision™ incorporates an anti-aliasing filter that uses cross-channel comparison. In this way, nonexistent frequencies produced by the aliasing phenomenon, caused by the low sampling rate of video cameras, are eliminated from the FFT.
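A quick worked example shows why this matters: any vibration above the camera's Nyquist frequency (half the frame rate) folds back into the spectrum as a false, lower "alias" frequency. The frame rate and vibration frequency below are illustrative:

```python
fps = 60                  # an assumed typical phone frame rate (Hz)
true_freq = 50.0          # actual vibration, above the 30 Hz Nyquist limit
# Folding: the frequency reported by the FFT is the distance to the
# nearest integer multiple of the sampling rate.
alias = abs(true_freq - round(true_freq / fps) * fps)
print(alias)              # -> 10.0: a nonexistent 10 Hz peak would appear
```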
For more information on the “Aliasing” effect visit: https://en.wikipedia.org/wiki/Aliasing