• The Story of Shorting Home Capital | Marc Cohodes Outtake | Real Vision Video

    This Home Capital story didn't make it into the final cut of the Marc Cohodes interview on Real Vision, so you'll sure as hell want to see what did. Listen to Marc Cohodes on Real Vision and become a better investor: http://rvtv.io/2oiHI2f Real Vision's new flagship interview series premieres, as famed short seller Marc Cohodes joins Grant Williams for a candid and emotionally raw interview. In the first in a series of extended discussions with successful investors, Grant asks Marc about his short selection process, the difficulty of managing emotions when embroiled in a fight and his current battles with a series of companies he believes to be fraudulent. Marc offers a vicious takedown of biotech company MiMedx, and shares a gut-wrenching story of losing money during 2008, despite being ...

    published: 16 Dec 2017
  • Open Source TensorFlow Models (Google I/O '17)

    Come to this talk for a tour of the latest open source TensorFlow models for Image Classification, Natural Language Processing, and Computer Generated Artwork. Along the way, Josh Gordon will share thoughts on Deep Learning, open source research, and educational resources you can use to learn more. See all the talks from Google I/O '17 here: https://goo.gl/D0D4VE Subscribe to the Google Developers channel: http://goo.gl/mQyv5L Follow Josh on Twitter: https://twitter.com/random_forests #io17 #GoogleIO #GoogleIO2017

    published: 18 May 2017
  • FarmBot: open source backyard robot for a fully automated garden

    In the front yard of Rory Aronson’s San Luis Obispo home (that he shares with 9 roommates), a robot is tending his garden, seeding, watering, weeding and testing the soil, while he controls it from his phone. FarmBot is what he calls “humanity's open-source automated precision farming machine”. https://farmbot.io/ As a student at Cal Poly San Luis Obispo he was inspired by a guest lecture in his organic agriculture class, “when a traditional farmer came in talking about some of the tractor technology he’s using on his farm and I looked at that and said, ‘Wait a minute, I can do that better,’” explains Aronson. “The first thing that I thought of when I thought of the idea was, ‘Oh this probably exists let me go look it up’ and I scoured the Internet. I was amazed actually, that there wa...

    published: 25 Sep 2016
  • 6 Open Source Test Automation Frameworks You Need to Know

    http://www.joecolantonio.com/2016/05/10/6-open-source-test-automation-frameworks-need-know/ Before you fall into the “build your own framework” trap, be sure to check out these six open-source automation solutions: Serenity, Robot Framework, RedwoodHQ, Sahi, Galen Framework, and Gauge

    published: 21 Jun 2016
  • A.I. Experiments: Giorgio Cam

    Check out https://g.co/aiexperiments to learn more. This is an experiment built with machine learning that lets you make music with the computer just by taking a picture. It uses image recognition to label what it sees, then it turns those labels into lyrics of a song. http://g.co/aiexperiments Built by Eric Rosenbaum, Yotam Mann, and friends at Google Creative Lab using MaryTTS, Tone.js, and Google Cloud Vision API. Features music by Giorgio Moroder. More resources: https://cloud.google.com/vision/ https://github.com/marytts/marytts https://github.com/Tonejs/Tone.js

    published: 15 Nov 2016
  • Introduction to the Intel® Aero Compute Board and Vision Accessory Kit for UAVs | Intel Software

    The Intel® Aero Compute Board and the Aero Vision Accessory Kit are purpose-built for integration with any unmanned aerial vehicle (UAV). These key ingredients of the Intel® Aero Platform for UAVs are geared for developers, researchers, and UAV enthusiasts to help get applications airborne quickly*. While only the size of a standard playing card, the Compute Board features abundant storage capabilities, 802.11ac Wi-Fi, support for multiple cameras including the Intel® RealSense™ camera (R200), which is part of the optional Vision Accessory Kit, industry standard interfaces, and reconfigurable I/O to facilitate connecting to a broad variety of drone hardware subsystems. The Compute Board ships with an open-source embedded Linux operating system and is offered with sample applications and APIs...

    published: 30 Nov 2016
  • The KITTI Vision Benchmark Suite

    This benchmark suite was designed to provide challenging realistic datasets to the computer vision community. Our benchmarks currently evaluate stereo, optical flow, visual odometry, 3D object detection and tracking. If you want to contribute results of your method(s), have a look at our evaluation webserver at: http://www.cvlibs.net/datasets/kitti

    published: 14 Mar 2012
  • Computer Vision and Machine Learning, by Nick Wong

    A basic introduction to some fundamental concepts in machine learning using Tensorflow, coupled with an introduction to OpenCV2, a computer vision project.

    published: 31 Oct 2017
  • OpenCV Face Detection with Raspberry Pi - Robotics with Python p.7

    Next, we're going to touch on using OpenCV with the Raspberry Pi's camera, giving our robot the gift of sight. There are many steps involved in this process, so there's a lot that is about to be thrown your way. If at any point you're stuck/lost/whatever, feel free to ask questions on the video and I will try to help where possible. There are a lot of moving parts here. If all else fails, I have hosted my Raspberry Pi image: https://drive.google.com/file/d/0B11p78NlrG-vZzdJLWYxcU5iMXM/view?usp=sharing OpenCV stands for Open Source Computer Vision, and it is an open source computer vision and machine learning library. To start, you will need to get OpenCV on to your Raspberry Pi. http://mitchtech.net/raspberry-pi-opencv/ Keep in mind, the "make" part of this tutorial will take 9-10 hours on a ...

    published: 01 Sep 2015
  • How computers learn to recognize objects instantly | Joseph Redmon

    Ten years ago, researchers thought that getting a computer to tell the difference between a cat and a dog would be almost impossible. Today, computer vision systems do it with greater than 99 percent accuracy. How? Joseph Redmon works on the YOLO (You Only Look Once) system, an open-source method of object detection that can identify objects in images and video -- from zebras to stop signs -- with lightning-quick speed. In a remarkable live demo, Redmon shows off this important step forward for applications like self-driving cars, robotics and even cancer detection. Check out more TED talks: http://www.ted.com The TED Talks channel features the best talks and performances from the TED Conference, where the world's leading thinkers and doers give the talk of their lives in 18 minutes (or ...

    published: 18 Aug 2017
  • Introducing Face Detection in the Google Vision APIs (100 Days of Google Dev)

    The Google Vision APIs provide two main areas of functionality. First is Face Tracking -- not to be confused with Facial Recognition -- which gives your apps the ability to detect faces, and landmarks on faces. This is useful, for example, for writing a camera app that only takes a picture when everyone is smiling and nobody is blinking, or for fun apps where you can superimpose hats or moustaches on people in the camera preview window. Second is recognizing visual codes such as bar codes or QR codes, and making it easy for developers to build apps with them. This Dev Byte covers the first of these. A separate byte will cover bar/QR codes. 100 Days of Google Dev / 100 developer videos over 100 days / #GoogleDev100 Subscribe to the Google Developers channel at http://goo.gl/mQyv5L...

    published: 20 Aug 2015
  • Why Tokens Are The Biggest Opportunity In The Cryptocurrency Space | Dan Morehead Interview

    Watch the interview in full (and more) on Real Vision. Start your 7-day free trial: http://rvtv.io/2uxgNRE Dan Morehead from Pantera Capital was one of the first major investors in Bitcoin and the early cryptocurrency technology. With the advent of tokenization, Dan assesses the impact that the next wave of decentralized applications will have on the world’s most valuable data monopolies and where the winners will emerge, as the Blockchain disintermediates venture capital in a post-capitalist era. In this clip Dan Morehead explains why tokens are the biggest opportunity in the cryptocurrency space.

    published: 26 Jul 2017
  • Real time 3D reconstruction using Stereo vision

    Using a stereo camera and the OpenCV 2.4.9 viz class. Real-time 3D reconstruction program.

    published: 15 Dec 2015
  • Richard Baraniuk on open-source learning

    http://www.ted.com Rice University professor Richard Baraniuk explains the vision behind Connexions, his open-source, online education system. It cuts out the textbook, allowing teachers to share and modify course materials freely, anywhere in the world. TEDTalks is a daily video podcast of the best talks and performances from the TED Conference, where the world's leading thinkers and doers are invited to give the talk of their lives in 18 minutes. TED stands for Technology, Entertainment, and Design, and TEDTalks cover these topics as well as science, business, politics and the arts. Watch the Top 10 TEDTalks on TED.com, at http://www.ted.com/index.php/talks/top10 Follow us on Twitter http://www.twitter.com/tednews Check out our Facebook page for TED exclusives https://www.fa...

    published: 12 Jan 2007
  • IOHK | Charles Hoskinson Keynote

    The technology was conceived in an Osaka restaurant more than two years ago and from that small beginning Cardano has been built into a leading cryptocurrency. The project has amassed a team of experts in countries around the world, has generated more than 67,000 lines of code, and has a strong and growing community in countries across Asia and beyond. Along the way, Cardano has set new standards for cryptocurrencies with best practices such as peer review and high assurance methods of software engineering. The official launch was held in the district of Shibuya in Tokyo on Saturday October 14 for an audience of about 500 people, who had each won a ticket through a lottery held on social media. Excited cryptocurrency enthusiasts, Ada holders and business people from across Japan queued to...

    published: 09 Nov 2017
  • Parallella: An open hardware platform for teaching parallel programming

    Making parallel computing easy to use has been described as "a problem as hard as any that computer science has faced." The goal of the Parallella project is to democratize access to parallel computing through affordable open hardware and open source tools so that the whole world can participate in solving this grand challenge problem. The talk will give an in-depth technical review of the Parallella platform and will conclude by showing real parallel code samples written for Parallella in bare metal threads, MPI, OpenCL, and Java 8 fork/join. Author: Andreas Olofsson Andreas Olofsson (@adapteva) founded Adapteva in 2008 with a mission to create a new class of low power parallel processors. Andreas is the architect and designer of the Epiphany processor chips, currently the world's most ...

    published: 04 Jan 2016
  • open source tracking software

    Webpage and download http://openvisionc.sourceforge.net/ This video was quickly put together to show some features of freely available software called open vision control. The initial aim was to create software for an automated paintball turret. It was then found that this type of software can be used for a multitude of applications. But having all these variations would include a large amount of coding and complexity. It was then decided to embed Python into the software, allowing scripts to be written and shared as the control part. Scripts we have written and tested include controlling servos for a turret application, notifying a user by email if motion is detected, using a microscope to track microspheres and plot trajectories, and controlling the mouse cursor using hand movements ...

    published: 24 Sep 2011
  • Raspberry Pi Robot Arm With Computer Vision + Image Processing Pics

    The robot arm controller is a Raspberry Pi 2 Model B. The servomotors are Dynamixel AX-12A. There is a Raspberry Pi camera module mounted on the top for image processing. The Computer Vision algorithms applied here are Edge Detection, Binarization, Pixel Expansion, Labeling and Object Extraction. In this video I tried to show how the robot sees the world by adding pictures directly out of the Image Processing algorithms (I just added the coloring in the Labeling process). I also tried to sync the pictures to the superb music of the great artist “broke for free”. Here's some further info on the thing: I didn’t use OpenCV. The image processing algorithms applied here are all very simple. I wanted to write them on my own. Two important libraries which I used are Python's "picamera" and a l...

    published: 08 Dec 2015
  • Keynote: Open Source Networking and a Vision of Fully Automated Networks - Arpit Joshipura

    Keynote: Open Source Networking and a Vision of Fully Automated Networks - Arpit Joshipura, General Manager, Networking, The Linux Foundation. A disruption in the 140+ year old telecom industry is making networking cool again, with SDN/NFV, 5G, IoT, and AI at the heart of network automation. This talk will focus on how Carriers, Enterprises and Cloud Service providers are bracing for a shift from proprietary to open source; and how the Linux Foundation is in the middle of this with projects like ONAP, ODL, OPNFV and more. About Arpit Joshipura: Arpit brings over 25 years of networking expertise and vision to The Linux Foundation with technical depth and business breadth. He has instrumented and led major industry disruptions across Enterprises, Carriers and Cloud architectures including IP, ...

    published: 25 Oct 2017
  • REAL VISION - Technology and Real Estate, Richard Gerritsen, Yardi Systems

    PropertyEU, as part of the REAL VISION series, speaks to Richard Gerritsen of Yardi Systems about the growing influence of technology, data and information on the real estate sector, residential, retail markets, asset management and investment. Interviewed by Richard Betts, Publisher of PropertyEU, November 2016. Copyright: PropertyTV, PropertyEU

    published: 02 Dec 2016
  • Khang Hoang: Vision - Improve Your Workflow - JSConf.Asia 2016

    Remember the times when you needed to hardcode an API response because the actual APIs were still to be implemented, making it impossible to simulate edge cases with the real APIs or the network? The wait is over! Khang has created just the tool for those situations. Khang works as a React Native developer at Employment Hero by day. He is a JavaScript hacker & open source contributor by night. He organizes the React Native Facebook community group for Vietnam. JSConf.Asia - Capitol Theatre, Singapore - 25+26 November 2016. Source: https://2016.jsconf.asia/ License: For reuse of this video under a more permissive license please get in touch with us. The speakers retain the copyright for their performances.

    published: 19 Dec 2016
  • iOS 11 ARKit + Vision Framework = ARPaint

    This is how the real future of AR combined with computer vision may look. Amazing project by OSAMA ABDELKARIM ABOULHASSAN with open source code and detailed tutorial: - https://www.toptal.com/swift/ios-arkit-tutorial-drawing-in-air-with-fingers - https://github.com/oabdelkarim/ARPaint

    published: 10 Aug 2017
  • Open Source Robot Sensor Fusion IMU, Quadrature Encoders, Vision Processing and LIDAR

    published: 28 Sep 2017
  • Mark Blyth’s State of the Union - 2018

    Jan 4th, 2018. Mark Blyth is a political economist whose research focuses upon how uncertainty and randomness impact complex systems, particularly economic systems, and why people continue to believe stupid economic ideas despite buckets of evidence to the contrary. Interview on Open Source with Christopher Lydon https://www.patreon.com/radioopensource

    published: 05 Jan 2018
The Story of Shorting Home Capital | Marc Cohodes Outtake | Real Vision Video

  • Duration: 14:27
  • Updated: 16 Dec 2017
  • views: 8606

This Home Capital story didn't make it into the final cut of the Marc Cohodes interview on Real Vision, so you'll sure as hell want to see what did. Listen to Marc Cohodes on Real Vision and become a better investor: http://rvtv.io/2oiHI2f Real Vision's new flagship interview series premieres, as famed short seller Marc Cohodes joins Grant Williams for a candid and emotionally raw interview. In the first in a series of extended discussions with successful investors, Grant asks Marc about his short selection process, the difficulty of managing emotions when embroiled in a fight and his current battles with a series of companies he believes to be fraudulent. Marc offers a vicious takedown of biotech company MiMedx, and shares a gut-wrenching story of losing money during 2008, despite being perfectly positioned.
https://wn.com/The_Story_Of_Shorting_Home_Capital_|_Marc_Cohodes_Outtake_|_Real_Vision_Video
Open Source TensorFlow Models (Google I/O '17)

  • Duration: 33:37
  • Updated: 18 May 2017
  • views: 47092

Come to this talk for a tour of the latest open source TensorFlow models for Image Classification, Natural Language Processing, and Computer Generated Artwork. Along the way, Josh Gordon will share thoughts on Deep Learning, open source research, and educational resources you can use to learn more. See all the talks from Google I/O '17 here: https://goo.gl/D0D4VE Subscribe to the Google Developers channel: http://goo.gl/mQyv5L Follow Josh on Twitter: https://twitter.com/random_forests #io17 #GoogleIO #GoogleIO2017
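
As a rough, hypothetical sketch of what using one of these open source image classification models looks like in practice (not code from the talk), here is a pretrained MobileNetV2 from tf.keras classifying a single image; the file name dog.jpg is a placeholder:

    # Hypothetical sketch: classify one image with a pretrained open source
    # model from tf.keras (MobileNetV2 trained on ImageNet). "dog.jpg" is a
    # placeholder file name.
    import numpy as np
    import tensorflow as tf

    model = tf.keras.applications.MobileNetV2(weights="imagenet")

    # Load and preprocess a single 224x224 RGB image.
    img = tf.keras.preprocessing.image.load_img("dog.jpg", target_size=(224, 224))
    x = tf.keras.preprocessing.image.img_to_array(img)
    x = tf.keras.applications.mobilenet_v2.preprocess_input(x[np.newaxis, ...])

    # Predict and print the top-3 ImageNet labels.
    preds = model.predict(x)
    for _, label, score in tf.keras.applications.mobilenet_v2.decode_predictions(preds, top=3)[0]:
        print(f"{label}: {score:.3f}")
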
https://wn.com/Open_Source_Tensorflow_Models_(Google_I_O_'17)
FarmBot: open source backyard robot for a fully automated garden

  • Duration: 31:44
  • Updated: 25 Sep 2016
  • views: 227430

In the front yard of Rory Aronson’s San Luis Obispo home (that he shares with 9 roommates), a robot is tending his garden, seeding, watering, weeding and testing the soil, while he controls it from his phone. FarmBot is what he calls “humanity's open-source automated precision farming machine”. https://farmbot.io/ As a student at Cal Poly San Luis Obispo he was inspired by a guest lecture in his organic agriculture class, “when a traditional farmer came in talking about some of the tractor technology he’s using on his farm and I looked at that and said, ‘Wait a minute, I can do that better,’” explains Aronson. “The first thing that I thought of when I thought of the idea was, ‘Oh this probably exists let me go look it up’ and I scoured the Internet. I was amazed actually, that there was not a CNC-type farming equipment already existing so I said, well, I guess it’s up to me.” During the summer after graduation Aronson wrote a white paper to outline his ideas and within days he had the attention of “software developers, open-source enthusiasts, ag specialists, mechanical engineers, and more”. After several years of iterations and a crowdfunding campaign that has raised over a million dollars, the FarmBot team (Rory and programmers based worldwide) will release the FarmBot Genesis in early 2017. Using an Arduino and Raspberry Pi, FarmBots are “giant 3D printers, but instead of extruding plastic, its tools are seed injectors, watering nozzles, sensors, and more.” If you want to print your own, the specs are all free and open source, but if you’d rather buy an all-inclusive kit, it will cost you $2900, a number Aronson says will come down with time. He sees it as a long-term investment. “Because it’s so based in software, all of the functions, it will get better over time so even if you bought a kit today the hardware won’t change, but the software will allow it to do more and more things over time”. “My long-term vision for FarmBot is that it’s a home appliance,” explains Aronson. “Just like everyone has a refrigerator and a washing machine and a dryer, maybe you have a FarmBot too in the backyard doing its thing, and it’s like a utility that you use. You turn on the water on your faucet and water comes out, you go out into your backyard and there’s food that’s been grown for you.” Original story: https://faircompanies.com/videos/open-source-bot-plants-maintains-your-garden-when-you-cant/
https://wn.com/Farmbot_Open_Source_Backyard_Robot_For_A_Fully_Automated_Garden
6 Open Source Test Automation Frameworks You Need to Know

  • Duration: 5:53
  • Updated: 21 Jun 2016
  • views: 21530

http://www.joecolantonio.com/2016/05/10/6-open-source-test-automation-frameworks-need-know/ Before you fall into the “build your own framework” trap, be sure to check out these six open-source automation solutions: Serenity, Robot Framework, RedwoodHQ, Sahi, Galen Framework, and Gauge
https://wn.com/6_Open_Source_Test_Automation_Frameworks_You_Need_To_Know
A.I. Experiments: Giorgio Cam

  • Duration: 1:43
  • Updated: 15 Nov 2016
  • views: 107753

Check out https://g.co/aiexperiments to learn more. This is an experiment built with machine learning that lets you make music with the computer just by taking a picture. It uses image recognition to label what it sees, then it turns those labels into lyrics of a song. http://g.co/aiexperiments Built by Eric Rosenbaum, Yotam Mann, and friends at Google Creative Lab using MaryTTS, Tone.js, and Google Cloud Vision API. Features music by Giorgio Moroder. More resources: https://cloud.google.com/vision/ https://github.com/marytts/marytts https://github.com/Tonejs/Tone.js
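
For context, the label-detection step the experiment builds on can be sketched with the Google Cloud Vision API Python client roughly as below. This is an illustrative sketch, not the Giorgio Cam source; photo.jpg is a placeholder, Cloud credentials are assumed, and turning labels into lyrics and music is the app's own logic, not shown here:

    # Illustrative sketch: ask the Cloud Vision API what it sees in a picture.
    from google.cloud import vision

    client = vision.ImageAnnotatorClient()  # assumes GOOGLE_APPLICATION_CREDENTIALS is set

    with open("photo.jpg", "rb") as f:      # placeholder image file
        image = vision.Image(content=f.read())

    # Label detection returns descriptions plus confidence scores.
    response = client.label_detection(image=image)
    for label in response.label_annotations:
        print(f"{label.description}: {label.score:.2f}")
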
https://wn.com/A.I._Experiments_Giorgio_Cam
Introduction to the Intel® Aero Compute Board and Vision Accessory Kit for UAVs | Intel Software

  • Duration: 3:20
  • Updated: 30 Nov 2016
  • views: 11394

The Intel® Aero Compute Board and the Aero Vision Accessory Kit are purpose-built for integration with any unmanned aerial vehicle (UAV). These key ingredients of the Intel® Aero Platform for UAVs are geared for developers, researchers, and UAV enthusiasts to help get applications airborne quickly*. While only the size of a standard playing card, the Compute Board features abundant storage capabilities, 802.11ac Wi-Fi, support for multiple cameras including the Intel® RealSense™ camera (R200), which is part of the optional Vision Accessory Kit, industry standard interfaces, and reconfigurable I/O to facilitate connecting to a broad variety of drone hardware subsystems. The Compute Board ships with an open-source embedded Linux operating system and is offered with sample applications and APIs for flight and vision interfaces, reducing hurdles for developers of sophisticated drone applications. Learn more about the Intel® Aero Compute Board: http://intel.ly/2fAkLTJ Learn more about the Intel® Aero Ready to Fly Drone http://intel.ly/2fAouRb SUBSCRIBE NOW: http://bit.ly/2iZTCsz About Intel Software: The Intel® Developer Zone encourages and supports software developers who are developing applications for Intel hardware and software products. The Intel Software YouTube channel is a place to learn tips and tricks, get the latest news, and watch product demos from both Intel and our many partners across multiple fields. You'll find videos covering the topics listed below, and to learn more you can follow the links provided! Connect with Intel Software: Visit INTEL SOFTWARE WEBSITE: http://intel.ly/2j1UJYC Like INTEL SOFTWARE on FACEBOOK: http://bit.ly/2z8MPFF Follow INTEL SOFTWARE on TWITTER: http://bit.ly/2zahGSn INTEL SOFTWARE GITHUB: http://bit.ly/2zaih6z INTEL DEVELOPER ZONE LINKEDIN: http://bit.ly/2z979qs INTEL DEVELOPER ZONE INSTAGRAM: http://bit.ly/2z9Xsby INTEL GAME DEV TWITCH: http://bit.ly/2BkNshu
https://wn.com/Introduction_To_The_Intel®_Aero_Compute_Board_And_Vision_Accessory_Kit_For_Uavs_|_Intel_Software
The KITTI Vision Benchmark Suite

  • Duration: 4:56
  • Updated: 14 Mar 2012
  • views: 21653

This benchmark suite was designed to provide challenging realistic datasets to the computer vision community. Our benchmarks currently evaluate stereo, optical flow, visual odometry, 3D object detection and tracking. If you want to contribute results of your method(s), have a look at our evaluation webserver at: http://www.cvlibs.net/datasets/kitti
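
As an illustrative sketch (not part of the benchmark suite itself), computing a disparity map for one stereo pair with OpenCV block matching might look like this; the file names are placeholders, the matcher is untuned, and the uint16, disparity-times-256 PNG output follows the convention described in the KITTI development kit, which should be verified before submitting results:

    # Illustrative sketch: disparity for one KITTI stereo pair via OpenCV block matching.
    import cv2
    import numpy as np

    left = cv2.imread("image_2/000000_10.png", cv2.IMREAD_GRAYSCALE)   # placeholder paths
    right = cv2.imread("image_3/000000_10.png", cv2.IMREAD_GRAYSCALE)

    # numDisparities must be a multiple of 16; compute() returns fixed-point values (x16).
    matcher = cv2.StereoBM_create(numDisparities=128, blockSize=15)
    disparity = matcher.compute(left, right).astype(np.float32) / 16.0

    disparity[disparity < 0] = 0
    # Store as uint16 PNG scaled by 256 (check the KITTI devkit for the exact format).
    cv2.imwrite("disp_0/000000_10.png", (disparity * 256).astype(np.uint16))
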
https://wn.com/The_Kitti_Vision_Benchmark_Suite
Computer Vision and Machine Learning, by Nick Wong

  • Duration: 59:52
  • Updated: 31 Oct 2017
  • views: 3990

A basic introduction to some fundamental concepts in machine learning using Tensorflow, coupled with an introduction to OpenCV2, a computer vision project.
https://wn.com/Computer_Vision_And_Machine_Learning,_By_Nick_Wong
OpenCV Face Detection with Raspberry Pi - Robotics with Python p.7

  • Duration: 22:09
  • Updated: 01 Sep 2015
  • views: 192738

Next, we're going to touch on using OpenCV with the Raspberry Pi's camera, giving our robot the gift of sight. There are many steps involved in this process, so there's a lot that is about to be thrown your way. If at any point you're stuck/lost/whatever, feel free to ask questions on the video and I will try to help where possible. There are a lot of moving parts here. If all else fails, I have hosted my Raspberry Pi image: https://drive.google.com/file/d/0B11p78NlrG-vZzdJLWYxcU5iMXM/view?usp=sharing OpenCV stands for Open Source Computer Vision, and it is an open source computer vision and machine learning library. To start, you will need to get OpenCV on to your Raspberry Pi. http://mitchtech.net/raspberry-pi-opencv/ Keep in mind, the "make" part of this tutorial will take 9-10 hours on a Raspberry Pi Model B+. The Raspberry Pi 2 will do it in more like 2-4 hours. Either way, it will take a while. I just did it overnight one night. Text-based version and sample code: http://pythonprogramming.net/raspberry-pi-camera-opencv-face-detection-tutorial/ http://pythonprogramming.net https://twitter.com/sentdex
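
A minimal sketch of the Haar-cascade face detection at the heart of the tutorial is below. The tutorial itself pulls frames from the Pi camera module; here cv2.VideoCapture(0) stands in for whatever camera is attached, and the cascade path assumes a recent opencv-python install (the tutorial may load the XML from a local file instead):

    # Minimal sketch: Haar-cascade face detection on a single camera frame.
    import cv2

    cascade = cv2.CascadeClassifier(
        cv2.data.haarcascades + "haarcascade_frontalface_default.xml")

    cap = cv2.VideoCapture(0)   # stand-in for the Pi camera stream
    ok, frame = cap.read()
    cap.release()

    if ok:
        gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
        faces = cascade.detectMultiScale(gray, scaleFactor=1.3, minNeighbors=5)
        for (x, y, w, h) in faces:
            cv2.rectangle(frame, (x, y), (x + w, y + h), (0, 255, 0), 2)
        cv2.imwrite("faces.png", frame)
        print(f"Detected {len(faces)} face(s)")
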
https://wn.com/Opencv_Face_Detection_With_Raspberry_Pi_Robotics_With_Python_P.7
How computers learn to recognize objects instantly | Joseph Redmon

  • Duration: 7:38
  • Updated: 18 Aug 2017
  • views: 146968

Ten years ago, researchers thought that getting a computer to tell the difference between a cat and a dog would be almost impossible. Today, computer vision systems do it with greater than 99 percent accuracy. How? Joseph Redmon works on the YOLO (You Only Look Once) system, an open-source method of object detection that can identify objects in images and video -- from zebras to stop signs -- with lightning-quick speed. In a remarkable live demo, Redmon shows off this important step forward for applications like self-driving cars, robotics and even cancer detection. Check out more TED talks: http://www.ted.com The TED Talks channel features the best talks and performances from the TED Conference, where the world's leading thinkers and doers give the talk of their lives in 18 minutes (or less). Look for talks on Technology, Entertainment and Design -- plus science, business, global issues, the arts and more. Follow TED on Twitter: http://www.twitter.com/TEDTalks Like TED on Facebook: https://www.facebook.com/TED Subscribe to our channel: https://www.youtube.com/TED
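
As a hedged sketch of running a YOLO model outside Redmon's original Darknet code, OpenCV's DNN module can load the published .cfg/.weights files roughly as follows (the paths are placeholders, and the weights are downloaded separately from the YOLO project):

    # Hedged sketch: run a YOLO model through OpenCV's DNN module.
    import cv2
    import numpy as np

    net = cv2.dnn.readNetFromDarknet("yolov3.cfg", "yolov3.weights")   # placeholder paths
    img = cv2.imread("street.jpg")

    blob = cv2.dnn.blobFromImage(img, 1 / 255.0, (416, 416), swapRB=True, crop=False)
    net.setInput(blob)
    outputs = net.forward(net.getUnconnectedOutLayersNames())

    # Each detection row is [cx, cy, w, h, objectness, class scores...].
    for out in outputs:
        for det in out:
            scores = det[5:]
            class_id = int(np.argmax(scores))
            if scores[class_id] > 0.5:
                print(f"class {class_id}, confidence {scores[class_id]:.2f}")
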
https://wn.com/How_Computers_Learn_To_Recognize_Objects_Instantly_|_Joseph_Redmon
Introducing Face Detection in the Google Vision APIs (100 Days of Google Dev)

  • Duration: 4:00
  • Updated: 20 Aug 2015
  • views: 40248

The Google Vision APIs provide two main areas of functionality. First is Face Tracking -- not to be confused with Facial Recognition -- which gives your apps the ability to detect faces, and landmarks on faces. This is useful, for example, for writing a camera app that only takes a picture when everyone is smiling and nobody is blinking, or for fun apps where you can superimpose hats or moustaches on people in the camera preview window. Second is recognizing visual codes such as bar codes or QR codes, and making it easy for developers to build apps with them. This Dev Byte covers the first of these. A separate byte will cover bar/QR codes. 100 Days of Google Dev / 100 developer videos over 100 days / #GoogleDev100 Subscribe to the Google Developers channel at http://goo.gl/mQyv5L
https://wn.com/Introducing_Face_Detection_In_The_Google_Vision_Apis_(100_Days_Of_Google_Dev)
Why Tokens Are The Biggest Opportunity In The Cryptocurrency Space | Dan Morehead Interview

  • Duration: 4:06
  • Updated: 26 Jul 2017
  • views: 1452

Watch the interview in full (and more) on Real Vision. Start your 7-day free trial: http://rvtv.io/2uxgNRE Dan Morehead from Pantera Capital was one of the first major investors in Bitcoin and the early cryptocurrency technology. With the advent of tokenization, Dan assesses the impact that the next wave of decentralized applications will have on the world’s most valuable data monopolies and where the winners will emerge, as the Blockchain disintermediates venture capital in a post-capitalist era. In this clip Dan Morehead explains why tokens are the biggest opportunity in the cryptocurrency space.
https://wn.com/Why_Tokens_Are_The_Biggest_Opportunity_In_The_Cryptocurrency_Space_|_Dan_Morehead_Interview
Real time 3D reconstruction using Stereo vision

  • Duration: 0:52
  • Updated: 15 Dec 2015
  • views: 2736

Using a stereo camera and the OpenCV 2.4.9 viz class. Real-time 3D reconstruction program.
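
The core of such a pipeline, sketched here with a modern OpenCV rather than the 2.4.9 viz module used in the video, might look like this; the Q reprojection matrix below is a placeholder that would normally come from stereo calibration (cv2.stereoRectify):

    # Sketch: disparity via semi-global matching, then reprojection to 3D points.
    import cv2
    import numpy as np

    left = cv2.imread("left.png", cv2.IMREAD_GRAYSCALE)    # placeholder images
    right = cv2.imread("right.png", cv2.IMREAD_GRAYSCALE)

    matcher = cv2.StereoSGBM_create(minDisparity=0, numDisparities=128, blockSize=5)
    disparity = matcher.compute(left, right).astype(np.float32) / 16.0

    # Placeholder reprojection matrix; use the Q returned by cv2.stereoRectify.
    Q = np.float32([[1, 0, 0, -320],
                    [0, 1, 0, -240],
                    [0, 0, 0,  700],
                    [0, 0, 1 / 0.1, 0]])
    points = cv2.reprojectImageTo3D(disparity, Q)
    print(f"Reconstructed {(disparity > 0).sum()} 3D points")
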
https://wn.com/Real_Time_3D_Reconstruction_Using_Stereo_Vision
Richard Baraniuk on open-source learning

  • Duration: 19:20
  • Updated: 12 Jan 2007
  • views: 93904

http://www.ted.com Rice University professor Richard Baraniuk explains the vision behind Connexions, his open-source, online education system. It cuts out the textbook, allowing teachers to share and modify course materials freely, anywhere in the world. TEDTalks is a daily video podcast of the best talks and performances from the TED Conference, where the world's leading thinkers and doers are invited to give the talk of their lives in 18 minutes. TED stands for Technology, Entertainment, and Design, and TEDTalks cover these topics as well as science, business, politics and the arts. Watch the Top 10 TEDTalks on TED.com, at http://www.ted.com/index.php/talks/top10 Follow us on Twitter http://www.twitter.com/tednews Check out our Facebook page for TED exclusives https://www.facebook.com/TED
https://wn.com/Richard_Baraniuk_On_Open_Source_Learning
IOHK | Charles Hoskinson Keynote

  • Duration: 54:56
  • Updated: 09 Nov 2017
  • views: 4318

The technology was conceived in an Osaka restaurant more than two years ago and from that small beginning Cardano has been built into a leading cryptocurrency. The project has amassed a team of experts in countries around the world, has generated more than 67,000 lines of code, and has a strong and growing community in countries across Asia and beyond. Along the way, Cardano has set new standards for cryptocurrencies with best practices such as peer review and high assurance methods of software engineering. The official launch was held in the district of Shibuya in Tokyo on Saturday October 14 for an audience of about 500 people, who had each won a ticket through a lottery held on social media. Excited cryptocurrency enthusiasts, Ada holders and business people from across Japan queued to get Cardano t-shirts and souvenir physical Ada coins, before going into the main hall to hear about how Cardano was created and the vision for its future. “The first thing we did when we knew the project was real was to build great partnerships,” Charles Hoskinson, founder and CEO of IOHK, told the audience. “Our chief scientist is based at University of Edinburgh, it is a wonderful place, where they built the heart of Cardano. We have a lot of wonderful people at the University of Athens, they are rigorous, making sure that the theory works. And we have people at Tokyo Tech who work on multi party computation and look to the future, and work out how to make Cardano last a long time.” The vision for Cardano, Hoskinson said, was that it would pull together academic research and bright ideas from computer science to produce a cryptocurrency capable of much more than its predecessors. This “third generation” cryptocurrency would be able to scale to a billion users, using a proof of stake algorithm, Ouroboros, which avoided the huge energy consumption of proof of work cryptocurrencies. Features that would be added to Cardano to help it scale included sidechains, trusted hardware, and RINA, or recursive internetwork architecture. Sustainability would be part of the design by way of a treasury system to fund development indefinitely, allowing stakeholders to vote on proposed changes to the protocol. Meanwhile, the computation layer of the technology, would be innovative in using a tool called K Framework to allow developers to write smart contracts in the programming language of their choice, he said. Security is paramount to cryptocurrency because flaws in code increase the risk of hacks and the loss of coin holder funds, unfortunately witnessed too often. With that in mind, Duncan Coutts, head of engineering at IOHK, explained how the company approaches software development: cryptography research papers are translated into code using the technique of formal specification. This involves a series of mathematical steps that progressively take the cryptography closer to the code that the developers write, a process that allows checks to be made that the specifications are indeed correct. After the presentation crowds formed outside the hall to have their photos taken with the Cardano team. Some people who came along were longstanding supporters of the project, such as Naomi Nisiguchi, from Mie Prefecture. She works as a manager in the construction industry and has had an interest in cryptocurrency for four years. “Around two years ago I heard about Ada and that Charles Hoskinson was involved,” she said. 
“I’ve been following the news on Facebook and I’m very interested to learn how the project will move on.”
The Cardano Portfolio:
  • The Cardano Hub, the source for all things Cardano: https://www.cardanohub.org/en/home/
  • Cardano Blockchain Explorer, an open source block explorer for the Cardano project: https://cardanoexplorer.com
  • Cardano Documentation, full technical documentation of the project: https://cardanodocs.com
  • Cardano Roadmap, development path of the Cardano project: https://cardanoroadmap.com
  • Why Cardano, the philosophy behind the project: https://whycardano.com
  • Daedalus Platform, open source platform: https://daedaluswallet.io
  • The Cardano Foundation, supervisory and educational body for the Cardano Protocol: https://cardanofoundation.org
  • Cardano Foundation YouTube, all the latest videos & tutorials: https://www.youtube.com/channel/UCbQ9...
  • Cardano Foundation, follow the Foundation: https://twitter.com/CardanoStiftung
  • Cardano Slack, join the conversation: https://cardano.herokuapp.com
  • Cardano reddit, join the conversation: https://www.reddit.com/r/cardano/
  • IOHK, development partner: https://iohk.io
  • IOHK blog, read about the latest technology advancements: https://iohk.io/blog/
https://wn.com/Iohk_|_Charles_Hoskinson_Keynote
Parallella: An open hardware platform for teaching parallel programming

  • Duration: 1:00:30
  • Updated: 04 Jan 2016
  • views: 13955

Making parallel computing easy to use has been described as "a problem as hard as any that computer science has faced." The goal of the Parallella project is to democratize access to parallel computing through affordable open hardware and open source tools so that the whole world can participate in solving this grand challenge problem. The talk will give an in-depth technical review of the Parallella platform and will conclude by showing real parallel code samples written for Parallella in bare metal threads, MPI, OpenCL, and Java 8 fork/join. Author: Andreas Olofsson Andreas Olofsson (@adapteva) founded Adapteva in 2008 with a mission to create a new class of low power parallel processors. Andreas is the architect and designer of the Epiphany processor chips, currently among the world's most energy efficient microprocessors. In the fall of 2012, Adapteva launched the Parallella parallel computing project with the goal of producing a $99 credit card sized "supercomputer" that consumes less than 5 watts. Prior to starting Adapteva, Andreas worked at Analog Devices for 10 years developing energy efficient DSPs and mixed signal SoCs. Andreas holds a BS degree in Physics and BS/MS degrees in Electrical Engineering from the University of Pennsylvania. Blog: http://www.adapteva.com/
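
The talk's code samples target the Parallella SDK directly; purely as an illustration of the MPI message-passing style it mentions (not Parallella code), a tiny mpi4py sketch that sums partial results across ranks looks like this:

    # Not Parallella SDK code: a tiny mpi4py sketch of the MPI message-passing style.
    # Run with e.g. `mpiexec -n 4 python sum.py`.
    from mpi4py import MPI

    comm = MPI.COMM_WORLD
    rank = comm.Get_rank()
    size = comm.Get_size()

    # Each rank sums its own slice of 0..999, then rank 0 combines the results.
    local = sum(range(rank, 1000, size))
    total = comm.reduce(local, op=MPI.SUM, root=0)

    if rank == 0:
        print(f"total across {size} ranks: {total}")
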
https://wn.com/Parallella_An_Open_Hardware_Platform_For_Teaching_Parallel_Programming
open source tracking software

  • Duration: 2:29
  • Updated: 24 Sep 2011
  • views: 42025

Webpage and download http://openvisionc.sourceforge.net/ This video was quickly put together to show some features of freely available software called open vision control. The initial aim was to create software for an automated paintball turret. It was then found that this type of software can be used for a multitude of applications. But having all these variations would include a large amount of coding and complexity. It was then decided to embed Python into the software, allowing scripts to be written and shared as the control part. Scripts we have written and tested include controlling servos for a turret application, notifying a user by email if motion is detected, using a microscope to track microspheres and plot trajectories, and controlling the mouse cursor using hand movements, etc. The software is still being improved and worked on. The next stage would be to rewrite the GUI using Qt, improve performance, and add features such as barcode scanning. A quick install guide for Windows XP and 7:
1. download and extract the software
2. install OpenCV 2.1
3. install Python 2.7
4. make sure you have all the .NET dependencies.
Cheers
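
As a hedged illustration of the kind of Python motion-detection script described, written against plain OpenCV rather than the Open Vision Control scripting hooks (which are not documented here), a simple frame-differencing loop looks like this:

    # Hedged sketch: frame-differencing motion detection with plain OpenCV.
    import cv2

    cap = cv2.VideoCapture(0)
    ok, prev = cap.read()
    if not ok:
        raise SystemExit("no camera frame available")
    prev_gray = cv2.cvtColor(prev, cv2.COLOR_BGR2GRAY)

    for _ in range(300):                      # check a few hundred frames, then stop
        ok, frame = cap.read()
        if not ok:
            break
        gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
        # Pixels that changed a lot between consecutive frames count as motion.
        diff = cv2.absdiff(prev_gray, gray)
        moving = cv2.countNonZero(cv2.threshold(diff, 25, 255, cv2.THRESH_BINARY)[1])
        if moving > 500:
            print("motion detected")          # a turret or email script would act here
        prev_gray = gray

    cap.release()
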
https://wn.com/Open_Source_Tracking_Software
Raspberry Pi Robot Arm With Computer Vision + Image Processing Pics

  • Duration: 3:10
  • Updated: 08 Dec 2015
  • views: 66019

The robot arm controller is a Raspberry Pi 2 Model B. The servomotors are Dynamixel AX-12A. There is a Raspberry Pi camera module mounted on the top for image processing. The Computer Vision algorithms applied here are Edge Detection, Binarization, Pixel Expansion, Labeling and Object Extraction. In this video I tried to show how the robot sees the world by adding pictures directly out of the Image Processing algorithms (I just added the coloring in the Labeling process). I also tried to sync the pictures to the superb music of the great artist “broke for free”. Here's some further info on the thing: I didn’t use OpenCV. The image processing algorithms applied here are all very simple. I wanted to write them on my own. Two important libraries which I used are Python's "picamera" and a library called "ax12". "picamera" provides an easy way to get greyscale pixel data from the Raspberry Pi camera module. "ax12" is used for the communication with the Dynamixel AX-12A servos. I did write some code to make the servomotors move smoother (starting and stopping in a smooth sinusoidal manner). And then there was a bit of code to actually get the junctions into positions, which would allow the electromagnet to pick up the metallic things. In other words, this was about getting the thing to move correctly given some x and y values which were extracted from the image earlier. My blog about the thing: https://electrondust.com/2017/10/28/raspberry-pi-robot-arm-with-simple-computer-vision/ Source code: https://github.com/T-Kuhn/ScrewPicker Music: "Night Owl" by Broke For Free http://www.brokeforfree.com
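
The author wrote these image processing steps himself; purely as an illustration of what binarization, pixel expansion, labeling and object extraction boil down to, here is the same idea sketched with numpy and scipy.ndimage (a stand-in, not the video's code; the input frame is a placeholder for a greyscale capture from the picamera library):

    # Stand-in sketch (not the video's code): binarization, pixel expansion,
    # labeling and object extraction with numpy + scipy.ndimage.
    import numpy as np
    from scipy import ndimage

    # Placeholder for a greyscale frame, e.g. captured via the picamera library.
    frame = np.random.randint(0, 256, (240, 320), dtype=np.uint8)

    binary = frame > 128                          # binarization
    binary = ndimage.binary_dilation(binary)      # pixel expansion
    labels, n = ndimage.label(binary)             # labeling: one integer id per blob

    # Object extraction: the centre of mass of each blob gives a target (y, x).
    centres = ndimage.center_of_mass(binary, labels, range(1, n + 1))
    print(f"{n} objects, first centre: {centres[0] if centres else None}")
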
https://wn.com/Raspberry_Pi_Robot_Arm_With_Computer_Vision_Image_Processing_Pics
Keynote: Open Source Networking and a Vision of Fully Automated Networks - Arpit Joshipura

  • Duration: 18:03
  • Updated: 25 Oct 2017
  • views: 313

Keynote: Open Source Networking and a Vision of Fully Automated Networks - Arpit Joshipura, General Manager, Networking, The Linux Foundation. A disruption in the 140+ year old telecom industry is making networking cool again, with SDN/NFV, 5G, IoT, and AI at the heart of network automation. This talk will focus on how Carriers, Enterprises and Cloud Service providers are bracing for a shift from proprietary to open source; and how the Linux Foundation is in the middle of this with projects like ONAP, ODL, OPNFV and more. About Arpit Joshipura: Arpit brings over 25 years of networking expertise and vision to The Linux Foundation with technical depth and business breadth. He has instrumented and led major industry disruptions across Enterprises, Carriers and Cloud architectures including IP, Broadband, Optical, Mobile, Routing, Switching, L4-7, Cloud, Disaggregation, SDN/NFV, Open Networking and has been an early evangelist for open source. Arpit has served as CMO/VP in startups and larger enterprises including Prevoty, Dell/Force10, Ericsson/Redback, ONI/CIENA and BNR/Nortel leading strategy, product management, marketing, engineering and technology standards functions.
https://wn.com/Keynote_Open_Source_Networking_And_A_Vision_Of_Fully_Automated_Networks_Arpit_Joshipura
REAL VISION - Technology and Real Estate, Richard Gerritsen, Yardi Systems

  • Duration: 7:09
  • Updated: 02 Dec 2016
  • views: 99

PropertyEU, as part of the REAL VISION series, speaks to Richard Gerritsen of Yardi Systems about the growing influence of technology, data and information on the real estate sector, residential, retail markets, asset management and investment. Interviewed by Richard Betts, Publisher of PropertyEU, November 2016. Copyright: PropertyTV, PropertyEU
https://wn.com/Real_Vision_Technology_And_Real_Estate,_Richard_Gerritsen,_Yardi_Systems
Khang Hoang: Vision - Improve Your Workflow - JSConf.Asia 2016

  • Duration: 20:53
  • Updated: 19 Dec 2016
  • views: 2679

Remember the times when you needed to hardcode an API response because the actual APIs were still to be implemented, making it impossible to simulate edge cases with the real APIs or the network? The wait is over! Khang has created just the tool for those situations. Khang works as a React Native developer at Employment Hero by day. He is a JavaScript hacker & open source contributor by night. He organizes the React Native Facebook community group for Vietnam. JSConf.Asia - Capitol Theatre, Singapore - 25+26 November 2016. Source: https://2016.jsconf.asia/ License: For reuse of this video under a more permissive license please get in touch with us. The speakers retain the copyright for their performances.
https://wn.com/Khang_Hoang_Vision_Improve_Your_Workflow_Jsconf.Asia_2016
iOS 11 ARKit + Vision Framework = ARPaint

  • Duration: 1:01
  • Updated: 10 Aug 2017
  • views: 1221

This is how the real future of AR combined with computer vision may look. Amazing project by OSAMA ABDELKARIM ABOULHASSAN with open source code and detailed tutorial: - https://www.toptal.com/swift/ios-arkit-tutorial-drawing-in-air-with-fingers - https://github.com/oabdelkarim/ARPaint
https://wn.com/Ios_11_Arkit_Vision_Framework_Arpaint
Open Source Robot Sensor Fusion IMU, Quadrature Encoders, Vision Processing and LIDAR

  • Duration: 41:52
  • Updated: 28 Sep 2017
  • views: 88

https://wn.com/Open_Source_Robot_Sensor_Fusion_Imu,_Quadrature_Encoders,_Vision_Processing_And_Lidar
Mark Blyth’s State of the Union - 2018

  • Duration: 49:57
  • Updated: 05 Jan 2018
  • views: 143

Jan 4th, 2018. Mark Blyth is a political economist whose research focuses upon how uncertainty and randomness impact complex systems, particularly economic systems, and why people continue to believe stupid economic ideas despite buckets of evidence to the contrary. Interview on Open Source with Christopher Lydon https://www.patreon.com/radioopensource
https://wn.com/Mark_Blyth’S_State_Of_The_Union_2018