• Keynote: Open Source Networking and a Vision of Fully Automated Networks - Arpit Joshipura

    Keynote: Open Source Networking and a Vision of Fully Automated Networks - Arpit Joshipura, General Manager, Networking, The Linux Foundation. A disruption in the 140+ year-old telecom industry is making networking cool again, with SDN/NFV, 5G, IoT, and AI at the heart of network automation. This talk focuses on how carriers, enterprises and cloud service providers are bracing for a shift from proprietary to open source, and how the Linux Foundation is in the middle of this with projects like ONAP, ODL, OPNFV and more. About Arpit Joshipura: Arpit brings over 25 years of networking expertise and vision to The Linux Foundation, with technical depth and business breadth. He has instrumented and led major industry disruptions across enterprise, carrier and cloud architectures including IP, broadband, optical, mobile, routing, switching, L4-7, cloud, disaggregation, SDN/NFV and open networking, and has been an early evangelist for open source. Arpit has served as CMO/VP in startups and larger enterprises including Prevoty, Dell/Force10, Ericsson/Redback, ONI/CIENA and BNR/Nortel, leading strategy, product management, marketing, engineering and technology standards functions.

    published: 25 Oct 2017
  • Open Source TensorFlow Models (Google I/O '17)

    Come to this talk for a tour of the latest open source TensorFlow models for Image Classification, Natural Language Processing, and Computer Generated Artwork. Along the way, Josh Gordon will share thoughts on Deep Learning, open source research, and educational resources you can use to learn more. See all the talks from Google I/O '17 here: https://goo.gl/D0D4VE Subscribe to the Google Developers channel: http://goo.gl/mQyv5L Follow Josh on Twitter: https://twitter.com/random_forests #io17 #GoogleIO #GoogleIO2017

    published: 18 May 2017
  • How computers learn to recognize objects instantly | Joseph Redmon

    Ten years ago, researchers thought that getting a computer to tell the difference between a cat and a dog would be almost impossible. Today, computer vision systems do it with greater than 99 percent accuracy. How? Joseph Redmon works on the YOLO (You Only Look Once) system, an open-source method of object detection that can identify objects in images and video -- from zebras to stop signs -- with lightning-quick speed. In a remarkable live demo, Redmon shows off this important step forward for applications like self-driving cars, robotics and even cancer detection. Check out more TED talks: http://www.ted.com The TED Talks channel features the best talks and performances from the TED Conference, where the world's leading thinkers and doers give the talk of their lives in 18 minutes (or less). Look for talks on Technology, Entertainment and Design -- plus science, business, global issues, the arts and more. Follow TED on Twitter: http://www.twitter.com/TEDTalks Like TED on Facebook: https://www.facebook.com/TED Subscribe to our channel: https://www.youtube.com/TED

    published: 18 Aug 2017
  • OSN Days 2017: Open Source Networking

    Phil Robb, VP Operations, Networking & Orchestration, The Linux Foundation.

    published: 09 Nov 2017
  • Richard Baraniuk on open-source learning

    http://www.ted.com Rice University professor Richard Baraniuk explains the vision behind Connexions, his open-source, online education system. It cuts out the textbook, allowing teachers to share and modify course materials freely, anywhere in the world. TEDTalks is a daily video podcast of the best talks and performances from the TED Conference, where the world's leading thinkers and doers are invited to give the talk of their lives in 18 minutes. TED stands for Technology, Entertainment, and Design, and TEDTalks cover these topics as well as science, business, politics and the arts. Watch the Top 10 TEDTalks on TED.com, at http://www.ted.com/index.php/talks/top10 Follow us on Twitter http://www.twitter.com/tednews Check out our Facebook page for TED exclusives https://www.facebook.com/TED

    published: 12 Jan 2007
  • Beth Noveck: Demand a more open-source government

    What can governments learn from the open-data revolution? In this stirring talk, Beth Noveck, the former deputy CTO at the White House, shares a vision of practical openness -- connecting bureaucracies to citizens, sharing data, creating a truly participatory democracy. Imagine the "writable society" ... TEDTalks is a daily video podcast of the best talks and performances from the TED Conference, where the world's leading thinkers and doers give the talk of their lives in 18 minutes (or less). Look for talks on Technology, Entertainment and Design -- plus science, business, global issues, the arts and much more. Find closed captions and translated subtitles in many languages at http://www.ted.com/translate Follow TED news on Twitter: http://www.twitter.com/tednews Like TED on Facebook: https://www.facebook.com/TED Subscribe to our channel: http://www.youtube.com/user/TEDtalksDirector

    published: 27 Sep 2012
  • Open Source Learning: David Preston at TEDxUCLA

    David Preston holds a Ph.D. in Education Policy from the UCLA Graduate School of Education & Information Science. He has taught at universities and graduate institutes and consulted on matters of learning and organizational development for 20 years. For the past seven years, David has also taught English for students of all ability levels in grades 9-12 in Los Angeles and on California's central coast. About TEDx, x = independently organized event: In the spirit of ideas worth spreading, TEDx is a program of local, self-organized events that bring people together to share a TED-like experience. At a TEDx event, TEDTalks video and live speakers combine to spark deep discussion and connection in a small group. These local, self-organized events are branded TEDx, where x = independently organized TED event. The TED Conference provides general guidance for the TEDx program, but individual TEDx events are self-organized.* (*Subject to certain rules and regulations)

    published: 22 Mar 2013
  • 6 Open Source Test Automation Frameworks You Need to Know

    http://www.joecolantonio.com/2016/05/10/6-open-source-test-automation-frameworks-need-know/ Before you fall into the “build your own framework” trap, be sure to check out these six open-source automation solutions: Serenity, Robot Framework, RedwoodHQ, Sahi, Galen Framework and Gauge.

    published: 21 Jun 2016
  • Probabilistic Machine Learning in TensorFlow

    In this episode of Coffee with a Googler, Laurence Moroney sits down with Josh Dillon. Josh works on TensorFlow, Google’s open source library for numerical computation, which is typically used in Machine Learning and AI applications. He discusses working on the Distribution API, which is based on probabilistic programming. Watch this video to find out what exactly probabilistic programming is, where the use of Distributions and Bijectors comes into play, and how you can get started. Subscribe to our channel to stay up to date with Google Developers. Introducing TensorFlow Probability blog post → https://goo.gl/H3LG8y The code lives at → https://goo.gl/bdwspL Referenced paper → https://goo.gl/HtwJnj Watch more Coffee with a Googler → https://goo.gl/5l123N Subscribe to the Google Developers channel → http://goo.gl/mQyv5L
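    The Distributions and Bijectors discussed in the episode build on one mathematical idea: the change-of-variables rule for probability densities. As a rough, dependency-free sketch of that rule (the function names here are mine, not the TensorFlow Probability API): if Y = exp(X) with X normally distributed, the log-density of Y is the normal log-density evaluated at log y plus the log-Jacobian of the inverse transform.

```python
import math

def normal_logpdf(x, mu=0.0, sigma=1.0):
    # Log-density of a Normal(mu, sigma^2) distribution at x.
    return -0.5 * math.log(2 * math.pi * sigma ** 2) - (x - mu) ** 2 / (2 * sigma ** 2)

def lognormal_logpdf(y, mu=0.0, sigma=1.0):
    # Y = exp(X) with X ~ Normal(mu, sigma^2). The "exp" bijector's
    # inverse is log, whose derivative is 1/y, so the log-Jacobian
    # correction is -log(y).
    return normal_logpdf(math.log(y), mu, sigma) - math.log(y)
```

    TensorFlow Probability automates this pattern (a base distribution pushed through a bijector) so you don't derive the Jacobian term by hand; the sketch above is just the underlying arithmetic.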

    published: 13 Apr 2018
  • IOHK | Charles Hoskinson Keynote

    The technology was conceived in an Osaka restaurant more than two years ago, and from that small beginning Cardano has been built into a leading cryptocurrency. The project has amassed a team of experts in countries around the world, has generated more than 67,000 lines of code, and has a strong and growing community in countries across Asia and beyond. Along the way, Cardano has set new standards for cryptocurrencies with best practices such as peer review and high-assurance methods of software engineering. The official launch was held in the district of Shibuya in Tokyo on Saturday October 14 for an audience of about 500 people, who had each won a ticket through a lottery held on social media. Excited cryptocurrency enthusiasts, Ada holders and business people from across Japan queued to get Cardano t-shirts and souvenir physical Ada coins, before going into the main hall to hear about how Cardano was created and the vision for its future.

    “The first thing we did when we knew the project was real was to build great partnerships,” Charles Hoskinson, founder and CEO of IOHK, told the audience. “Our chief scientist is based at the University of Edinburgh, it is a wonderful place, where they built the heart of Cardano. We have a lot of wonderful people at the University of Athens, they are rigorous, making sure that the theory works. And we have people at Tokyo Tech who work on multi-party computation and look to the future, and work out how to make Cardano last a long time.”

    The vision for Cardano, Hoskinson said, was that it would pull together academic research and bright ideas from computer science to produce a cryptocurrency capable of much more than its predecessors. This “third generation” cryptocurrency would be able to scale to a billion users, using a proof-of-stake algorithm, Ouroboros, which avoids the huge energy consumption of proof-of-work cryptocurrencies. Features that would be added to Cardano to help it scale include sidechains, trusted hardware, and RINA (recursive internetwork architecture). Sustainability would be part of the design by way of a treasury system to fund development indefinitely, allowing stakeholders to vote on proposed changes to the protocol. Meanwhile, the computation layer of the technology would be innovative in using a tool called K Framework to allow developers to write smart contracts in the programming language of their choice, he said.

    Security is paramount to cryptocurrency because flaws in code increase the risk of hacks and the loss of coin-holder funds, unfortunately witnessed too often. With that in mind, Duncan Coutts, head of engineering at IOHK, explained how the company approaches software development: cryptography research papers are translated into code using the technique of formal specification. This involves a series of mathematical steps that progressively take the cryptography closer to the code that the developers write, a process that allows checks to be made that the specifications are indeed correct. After the presentation, crowds formed outside the hall to have their photos taken with the Cardano team. Some people who came along were longstanding supporters of the project, such as Naomi Nisiguchi, from Mie Prefecture. She works as a manager in the construction industry and has had an interest in cryptocurrency for four years. “Around two years ago I heard about Ada and that Charles Hoskinson was involved,” she said. “I’ve been following the news on Facebook and I’m very interested to learn how the project will move on.”

    The Cardano Portfolio: The Cardano Hub, the source for all things Cardano: https://www.cardanohub.org/en/home/ | Cardano Blockchain Explorer, an open source block explorer for the Cardano project: https://cardanoexplorer.com | Cardano Documentation: https://cardanodocs.com | Cardano Roadmap: https://cardanoroadmap.com | Why Cardano, the philosophy behind the project: https://whycardano.com | Daedalus Platform: https://daedaluswallet.io | The Cardano Foundation: https://cardanofoundation.org | Cardano Foundation YouTube: https://www.youtube.com/channel/UCbQ9... | Twitter: https://twitter.com/CardanoStiftung | Slack: https://cardano.herokuapp.com | reddit: https://www.reddit.com/r/cardano/ | IOHK, development partner: https://iohk.io | IOHK blog: https://iohk.io/blog/

    published: 09 Nov 2017
  • iOS 11 ARKit + Vision Framework = ARPaint

    This is how the real future of AR combined with Computer Vision may look. An amazing project by OSAMA ABDELKARIM ABOULHASSAN, with open source code and a detailed tutorial: - https://www.toptal.com/swift/ios-arkit-tutorial-drawing-in-air-with-fingers - https://github.com/oabdelkarim/ARPaint

    published: 10 Aug 2017
  • Ethereum Co-Founder Charles Hoskinson Video | ICOs and the Future of Investing

    Start your 14-day free trial on Real Vision. Learn how you can become a great investor: http://rvtv.io/2FnlFw9 Charles Hoskinson, the entrepreneur and mathematician who co-founded ethereum, takes a close look at initial coin offerings, their potential and the attendant risks. Charles also peels back the curtain on creating ethereum, which has risen above $40 billion in market capitalization.

    published: 11 Jan 2018
  • Ron Evans - Putting Eyes on the IoT: Advanced Computer Vision Using Golang

    Global IoT DevFest II November 7-8, 2017 The Global IoT DevFest provides industry thought leaders, innovators, developers, and enthusiasts worldwide a platform to share knowledge, present visions, conduct deep-dive training, and share real-world use cases and solutions. IoT experts and enthusiasts alike will come together in a virtual forum to share their voice, vision and solutions; teach and learn through sessions covering a wide range of topics; get connected through 1:1 mentoring sessions; and showcase cutting-edge IoT research and innovation.

    published: 07 Dec 2017
  • OpenCV Face Detection with Raspberry Pi - Robotics with Python p.7

    Next, we're going to touch on using OpenCV with the Raspberry Pi's camera, giving our robot the gift of sight. There are many steps involved in this process, so there's a lot that is about to be thrown your way. If at any point you're stuck/lost/whatever, feel free to ask questions on the video and I will try to help where possible. There are a lot of moving parts here. If all else fails, I have hosted my Raspberry Pi image: https://drive.google.com/file/d/0B11p78NlrG-vZzdJLWYxcU5iMXM/view?usp=sharing OpenCV (Open Source Computer Vision) is an open source computer vision and machine learning library. To start, you will need to get OpenCV on to your Raspberry Pi. http://mitchtech.net/raspberry-pi-opencv/ Keep in mind, the "make" part of this tutorial will take 9-10 hours on a Raspberry Pi Model B+. The Raspberry Pi 2 will do it in more like 2-4 hours. Either way, it will take a while. I just did it overnight one night. Text-based version and sample code: http://pythonprogramming.net/raspberry-pi-camera-opencv-face-detection-tutorial/ http://pythonprogramming.net https://twitter.com/sentdex

    published: 01 Sep 2015
  • SP1 Real-Time Stereo Vision System

    The SP1 stereo vision system (and also its successor SceneScan) is Nerian Vision Technologies' solution for real-time depth sensing. This stand-alone device connects to two USB industrial cameras, which are set up in a common stereo alignment. The SP1 processes the images of both cameras in real time, using a hardware implementation of a state-of-the-art stereo matching algorithm. The computed depth map is transmitted through gigabit ethernet to an attached computer or an embedded system. Stereo vision is a passive approach to depth perception, which allows its use in situations where active 3D sensors fail. This is usually the case in situations with bright ambient light, such as outdoors in sunshine. The SP1 provides the user with dense 3D sensory information even in such difficult conditions.
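    The depth computation in any stereo system like this rests on the standard pinhole-stereo relation: depth = focal length × baseline / disparity. A minimal sketch of that arithmetic (the calibration numbers below are invented for illustration, not the SP1's actual parameters):

```python
def disparity_to_depth(disparity_px, focal_px, baseline_m):
    # Pinhole stereo geometry: a point seen with disparity d (pixels)
    # by two cameras with focal length f (pixels) separated by a
    # baseline B (metres) lies at depth z = f * B / d.
    if disparity_px <= 0:
        return float("inf")  # no correspondence found: treat as infinitely far
    return focal_px * baseline_m / disparity_px

# Hypothetical calibration: 800 px focal length, 25 cm baseline.
depth_m = disparity_to_depth(40.0, focal_px=800.0, baseline_m=0.25)  # 5.0 m
```

    Note that depth is inversely proportional to disparity: distant objects produce tiny disparities, which is why matching accuracy matters most at long range.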

    published: 28 Oct 2015
  • OpenVX Workshop for Vision and Neural Network Acceleration - Part I

    Links: https://khr.io/evs2017 The course covers the graph API that enables OpenVX developers to efficiently run computer vision algorithms on heterogeneous computing architectures. A set of example algorithms for feature tracking and neural networks mapped to the graph API will be discussed. Also covered is the relationship between OpenVX and OpenCV, as well as OpenCL. The course includes a hands-on practice session that gets the participants started on solving real computer vision problems using OpenVX.

    published: 11 May 2017
  • The KITTI Vision Benchmark Suite

    This benchmark suite was designed to provide challenging realistic datasets to the computer vision community. Our benchmarks currently evaluate stereo, optical flow, visual odometry, 3D object detection and tracking. If you want to contribute results of your method(s), have a look at our evaluation webserver at: http://www.cvlibs.net/datasets/kitti

    published: 14 Mar 2012
  • CNN-SLAM: Real-time dense monocular SLAM with learned depth prediction

    "CNN-SLAM: Real-time dense monocular SLAM with learned depth prediction," K. Tateno, F. Tombari, I. Laina, N. Navab, IEEE Computer Society Conference on Computer Vision and Pattern Recognition (CVPR), 2017. http://campar.in.tum.de/pub/tateno2017cvpr/tateno2017cvpr.pdf Given the recent advances in depth prediction from Convolutional Neural Networks (CNNs), this paper investigates how predicted depth maps from a deep neural network can be deployed for accurate and dense monocular reconstruction. We propose a method where CNN-predicted dense depth maps are naturally fused together with depth measurements obtained from direct monocular SLAM. Our fusion scheme privileges depth prediction in image locations where monocular SLAM approaches tend to fail, e.g. along low-textured regions, and vice-versa.

    published: 15 Mar 2017
  • Raspberry Pi Robot Arm With Computer Vision + Image Processing Pics

    The robot arm controller is a Raspberry Pi 2 Model B. The servomotors are Dynamixel AX-12A. There is a Raspberry Pi camera module mounted on the top for image processing. The computer vision algorithms applied here are edge detection, binarization, pixel expansion, labeling and object extraction. In this video I tried to show how the robot sees the world by adding pictures taken directly from the image processing algorithms (I only added the coloring in the labeling step). I also tried to sync the pictures to the superb music of the great artist “Broke For Free”. Here's some further info on the thing: I didn’t use OpenCV. The image processing algorithms applied here are all very simple; I wanted to write them on my own. Two important libraries I used are Python's "picamera" and a ...
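    Binarization and labeling, two of the steps listed above, are simple enough to sketch in a few lines of pure Python. This is an illustrative reimplementation of the general technique, not the author's actual code:

```python
def binarize(image, threshold):
    # Map a grayscale image (list of rows of pixel values) to 0/1:
    # 1 where the pixel is at least `threshold`, else 0.
    return [[1 if px >= threshold else 0 for px in row] for row in image]

def label(binary):
    # 4-connected component labeling via iterative flood fill.
    # Returns a label image (labels start at 1) and the component count.
    h, w = len(binary), len(binary[0])
    labels = [[0] * w for _ in range(h)]
    count = 0
    for y in range(h):
        for x in range(w):
            if binary[y][x] and not labels[y][x]:
                count += 1
                stack = [(y, x)]
                while stack:
                    cy, cx = stack.pop()
                    if 0 <= cy < h and 0 <= cx < w and binary[cy][cx] and not labels[cy][cx]:
                        labels[cy][cx] = count
                        stack += [(cy + 1, cx), (cy - 1, cx), (cy, cx + 1), (cy, cx - 1)]
    return labels, count
```

    After labeling, "object extraction" amounts to collecting the pixel coordinates (or bounding box) of each distinct label value.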

    published: 08 Dec 2015
  • IOHK | Duncan Coutts, Director of Engineering

    https://iohk.io/team/duncan-coutts/ The technology was conceived in an Osaka restaurant more than two years ago, and from that small beginning Cardano has been built into a leading cryptocurrency. The project has amassed a team of experts in countries around the world, has generated more than 67,000 lines of code, and has a strong and growing community in countries across Asia and beyond. Along the way, Cardano has set new standards for cryptocurrencies with best practices such as peer review and high-assurance methods of software engineering. The official launch was held in the district of Shibuya in Tokyo on Saturday October 14 for an audience of about 500 people, who had each won a ticket through a lottery held on social media. Excited cryptocurrency enthusiasts, Ada holders and business people from across Japan queued to get Cardano t-shirts and souvenir physical Ada coins, before going into the main hall to hear about how Cardano was created and the vision for its future.

    published: 09 Nov 2017
  • Hello World - Machine Learning Recipes #1

    Six lines of Python is all it takes to write your first machine learning program! In this episode, we'll briefly introduce what machine learning is and why it's important. Then, we'll follow a recipe for supervised learning (a technique to create a classifier from examples) and code it up. Follow https://twitter.com/random_forests for updates on new episodes! Subscribe to the Google Developers channel: http://goo.gl/mQyv5L - Subscribe to the brand new Firebase channel: https://goo.gl/9giPHG And here's our playlist: https://goo.gl/KewA03
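    The episode builds its classifier with scikit-learn. As a dependency-free sketch of the same "learn a classifier from examples" recipe, here is a one-nearest-neighbor version; the fruit measurements below are invented for illustration, not the episode's exact data:

```python
def train(examples):
    # For nearest neighbor, "training" is just memorizing the labelled examples.
    return list(examples)

def predict(model, features):
    # Predict the label of the closest training example
    # (closest under squared Euclidean distance).
    def sq_dist(a, b):
        return sum((x - y) ** 2 for x, y in zip(a, b))
    return min(model, key=lambda ex: sq_dist(ex[0], features))[1]

# Features: [weight in grams, texture: 0 = bumpy, 1 = smooth].
examples = [
    ([140, 1], "apple"),
    ([130, 1], "apple"),
    ([150, 0], "orange"),
    ([170, 0], "orange"),
]
model = train(examples)
prediction = predict(model, [160, 0])  # heavy and bumpy → "orange"
```

    The supervised-learning recipe is the same regardless of the model: collect labelled examples, fit (or here, memorize) them, then predict labels for unseen inputs.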

    published: 30 Mar 2016
  • Loitor vision sensing Inertia Camera open-source project

    http://www.lodetc.com presents the Loitor Inertia camera project, an Intel RealSense competitor where the price can be less than $72 for the whole module. Loitor is an open-source 3D camera system that can be packaged to run with an Allwinner A83 processor; the motherboard also has a Cypress CY68013 and an STMicroelectronics STM32 system on board. The camera can recognize 3D space for indoor and outdoor mapping. Project developers can contact Loitor here: kang791208@aliyun.com, Skype: kang791208@gmail.com, Mobile: +86 13582312223

    published: 08 Apr 2017
  • Bill Gates interview: How the world will change by 2030

    The Verge sat down with Bill Gates to talk about his ambitious vision for improving the lives of the poor through technology. It just so happens that The Verge exists to explore that kind of change — which is why Bill Gates will be The Verge’s first ever guest editor in February. Subscribe: http://goo.gl/G5RXGs Read more: http://theverge.com/e/7634538 Check out our full video catalog: http://goo.gl/lfcGfq Visit our playlists: http://goo.gl/94XbKx Like The Verge on Facebook: http://goo.gl/2P1aGc Follow on Twitter: http://goo.gl/XTWX61 Follow on Instagram: http://goo.gl/7ZeLvX Read More: http://www.theverge.com

    published: 22 Jan 2015
  • FarmBot: open source backyard robot for a fully automated garden

    In the front yard of Rory Aronson’s San Luis Obispo home (which he shares with 9 roommates), a robot is tending his garden: seeding, watering, weeding and testing the soil, while he controls it from his phone. FarmBot is what he calls “humanity's open-source automated precision farming machine”. https://farmbot.io/ As a student at Cal Poly San Luis Obispo he was inspired by a guest lecture in his organic agriculture class, “when a traditional farmer came in talking about some of the tractor technology he’s using on his farm and I looked at that and said, ‘Wait a minute, I can do that better’,” explains Aronson. “The first thing that I thought of when I thought of the idea was, ‘Oh this probably exists, let me go look it up’ and I scoured the Internet. I was amazed, actually, that there wa...

    published: 25 Sep 2016
developed with YouTube
Keynote: Open Source Networking and a Vision of Fully Automated Networks - Arpit Joshipura
18:03

Keynote: Open Source Networking and a Vision of Fully Automated Networks - Arpit Joshipura

  • Order:
  • Duration: 18:03
  • Updated: 25 Oct 2017
  • views: 641
videos
Keynote: Open Source Networking and a Vision of Fully Automated Networks - Arpit Joshipura, General Manager, Networking, The Linux Foundation  A disruption in 140+ year old telecom industry is making networking cool again with SDN/NFV, 5G, IOT, and AI at the heart of network automation. This talk will focus on how Carriers, Enterprises and Cloud Service providers are bracing for a shift from proprietary to open source; and how the Linux Foundation is in the middle of this with projects like ONAP, ODL, OPNFV and more. About Arpit Joshipura Arpit brings over 25 years of networking expertise and vision to The Linux Foundation with technical depth and business breadth. He has instrumented and led major industry disruptions across Enterprises, Carriers and Cloud architectures including IP, Broadband, Optical, Mobile, Routing, Switching, L4-7, Cloud, Disaggregation, SDN/NFV, Open Networking and has been an early evangelist for open source. Arpit has served as CMO/VP in startups and larger enterprises including Prevoty, Dell/Force10, Ericsson/Redback, ONI/CIENA and BNR/Nortel leading strategy, product management, marketing, engineering and technology standards functions.
https://wn.com/Keynote_Open_Source_Networking_And_A_Vision_Of_Fully_Automated_Networks_Arpit_Joshipura
Open Source TensorFlow Models (Google I/O '17)
33:37

Open Source TensorFlow Models (Google I/O '17)

  • Order:
  • Duration: 33:37
  • Updated: 18 May 2017
  • views: 54081
videos
Come to this talk for a tour of the latest open source TensorFlow models for Image Classification, Natural Language Processing, and Computer Generated Artwork. Along the way, Josh Gordon will share thoughts on Deep Learning, open source research, and educational resources you can use to learn more. See all the talks from Google I/O '17 here: https://goo.gl/D0D4VE Subscribe to the Google Developers channel: http://goo.gl/mQyv5L Follow Josh on Twitter: https://twitter.com/random_forests #io17 #GoogleIO #GoogleIO2017
https://wn.com/Open_Source_Tensorflow_Models_(Google_I_O_'17)
How computers learn to recognize objects instantly | Joseph Redmon
7:38

How computers learn to recognize objects instantly | Joseph Redmon

  • Order:
  • Duration: 7:38
  • Updated: 18 Aug 2017
  • views: 246678
videos
Ten years ago, researchers thought that getting a computer to tell the difference between a cat and a dog would be almost impossible. Today, computer vision systems do it with greater than 99 percent accuracy. How? Joseph Redmon works on the YOLO (You Only Look Once) system, an open-source method of object detection that can identify objects in images and video -- from zebras to stop signs -- with lightning-quick speed. In a remarkable live demo, Redmon shows off this important step forward for applications like self-driving cars, robotics and even cancer detection. Check out more TED talks: http://www.ted.com The TED Talks channel features the best talks and performances from the TED Conference, where the world's leading thinkers and doers give the talk of their lives in 18 minutes (or less). Look for talks on Technology, Entertainment and Design -- plus science, business, global issues, the arts and more. Follow TED on Twitter: http://www.twitter.com/TEDTalks Like TED on Facebook: https://www.facebook.com/TED Subscribe to our channel: https://www.youtube.com/TED
https://wn.com/How_Computers_Learn_To_Recognize_Objects_Instantly_|_Joseph_Redmon
OSN Days 2017: Open Source Networking
45:36

OSN Days 2017: Open Source Networking

  • Order:
  • Duration: 45:36
  • Updated: 09 Nov 2017
  • views: 758
videos
Phil Robb, VP Operations, Networking & Orchestration, The Linux Foundation.
https://wn.com/Osn_Days_2017_Open_Source_Networking
Richard Baraniuk on open-source learning
19:20

Richard Baraniuk on open-source learning

  • Order:
  • Duration: 19:20
  • Updated: 12 Jan 2007
  • views: 94476
videos
http://www.ted.com Rice University professor Richard Baraniuk explains the vision behind Connexions, his open-source, online education system. It cuts out the textbook, allowing teachers to share and modify course materials freely, anywhere in the world. TEDTalks is a daily video podcast of the best talks and performances from the TED Conference, where the world's leading thinkers and doers are invited to give the talk of their lives in 18 minutes. TED stands for Technology, Entertainment, and Design, and TEDTalks cover these topics as well as science, business, politics and the arts. Watch the Top 10 TEDTalks on TED.com, at http://www.ted.com/index.php/talks/top10 Follow us on Twitter http://www.twitter.com/tednews Checkout our Facebook page for TED exclusives https://www.facebook.com/TED
https://wn.com/Richard_Baraniuk_On_Open_Source_Learning
Beth Noveck: Demand a more open-source government
17:24

Beth Noveck: Demand a more open-source government

  • Order:
  • Duration: 17:24
  • Updated: 27 Sep 2012
  • views: 20277
videos
What can governments learn from the open-data revolution? In this stirring talk, Beth Noveck, the former deputy CTO at the White House, shares a vision of practical openness -- connecting bureaucracies to citizens, sharing data, creating a truly participatory democracy. Imagine the "writable society" ... TEDTalks is a daily video podcast of the best talks and performances from the TED Conference, where the world's leading thinkers and doers give the talk of their lives in 18 minutes (or less). Look for talks on Technology, Entertainment and Design -- plus science, business, global issues, the arts and much more. Find closed captions and translated subtitles in many languages at http://www.ted.com/translate Follow TED news on Twitter: http://www.twitter.com/tednews Like TED on Facebook: https://www.facebook.com/TED Subscribe to our channel: http://www.youtube.com/user/TEDtalksDirector
https://wn.com/Beth_Noveck_Demand_A_More_Open_Source_Government
Open Source Learning: David Preston at TEDxUCLA
10:37

Open Source Learning: David Preston at TEDxUCLA

  • Order:
  • Duration: 10:37
  • Updated: 22 Mar 2013
  • views: 3710
videos
David Preston holds a Ph.D. in Education Policy from the UCLA Graduate School of Education & Information Science. He has taught at universities and graduate institutes and consulted on matters of learning and organizational development for 20 years. For the past seven years, David has also taught English for students of all ability levels in grades 9-12 in Los Angeles and on California's central coast. About TEDx, x = independently organized event In the spirit of ideas worth spreading, TEDx is a program of local, self-organized events that bring people together to share a TED-like experience. At a TEDx event, TEDTalks video and live speakers combine to spark deep discussion and connection in a small group. These local, self-organized events are branded TEDx, where x = independently organized TED event. The TED Conference provides general guidance for the TEDx program, but individual TEDx events are self-organized.* (*Subject to certain rules and regulations)
https://wn.com/Open_Source_Learning_David_Preston_At_Tedxucla
6 Open Source Test Automation Frameworks You Need to Know
5:53

6 Open Source Test Automation Frameworks You Need to Know

  • Order:
  • Duration: 5:53
  • Updated: 21 Jun 2016
  • views: 27708
videos
http://www.joecolantonio.com/2016/05/10/6-open-source-test-automation-frameworks-need-know/ Before you fall into the “build your own framework” trap, be sure to check out these six open-source automation solutions. Serenity RobotFramework ReadwoodHQ Sahi Galen Framework Gauge
https://wn.com/6_Open_Source_Test_Automation_Frameworks_You_Need_To_Know
Probabilistic Machine Learning in TensorFlow
9:43

Probabilistic Machine Learning in TensorFlow

  • Order:
  • Duration: 9:43
  • Updated: 13 Apr 2018
  • views: 702
videos
In this episode of Coffee with a Googler, Laurence Moroney sits down with Josh Dillon. Josh works on TensorFlow, Google’s open source library for numerical computation, which is typically used in Machine Learning and AI applications. He discusses working on the Distribution API, which is based on probabilistic programming. Watch this video to find out what exactly probabilistic programming is, where the use of Distributions and Bijectors comes into play, & how you can get started. Subscribe to our channel to stay up to date with Google Developers. Introducing TensorFlow Probability blog post → https://goo.gl/H3LG8y The code lives at → https://goo.gl/bdwspL Referenced paper → https://goo.gl/HtwJnj Watch more Coffee with a Googler → https://goo.gl/5l123N Subscribe to the Google Developers Channelhttp://goo.gl/mQyv5L
https://wn.com/Probabilistic_Machine_Learning_In_Tensorflow
IOHK | Charles Hoskinson Keynote
54:56

IOHK | Charles Hoskinson Keynote

  • Order:
  • Duration: 54:56
  • Updated: 09 Nov 2017
  • views: 5084
videos
The technology was conceived in an Osaka restaurant more than two years ago and from that small beginning Cardano has been built into a leading cryptocurrency. The project has amassed a team of experts in countries around the world, has generated more than 67,000 lines of code, and has a strong and growing community in countries across Asia and beyond. Along the way, Cardano has set new standards for cryptocurrencies with best practices such as peer review and high assurance methods of software engineering. The official launch was held in the district of Shibuya in Tokyo on Saturday October 14 for an audience of about 500 people, who had each won a ticket through a lottery held on social media. Excited cryptocurrency enthusiasts, Ada holders and business people from across Japan queued to get Cardano t-shirts and souvenir physical Ada coins, before going into the main hall to hear about how Cardano was created and the vision for its future. “The first thing we did when we knew the project was real was to build great partnerships,” Charles Hoskinson, founder and CEO of IOHK, told the audience. “Our chief scientist is based at University of Edinburgh, it is a wonderful place, where they built the heart of Cardano. We have a lot of wonderful people at the University of Athens, they are rigorous, making sure that the theory works. And we have people at Tokyo Tech who work on multi party computation and look to the future, and work out how to make Cardano last a long time.” The vision for Cardano, Hoskinson said, was that it would pull together academic research and bright ideas from computer science to produce a cryptocurrency capable of much more than its predecessors. This “third generation” cryptocurrency would be able to scale to a billion users, using a proof of stake algorithm, Ouroboros, which avoided the huge energy consumption of proof of work cryptocurrencies. 
Features that would be added to Cardano to help it scale included sidechains, trusted hardware, and RINA, or recursive internetwork architecture. Sustainability would be part of the design by way of a treasury system to fund development indefinitely, allowing stakeholders to vote on proposed changes to the protocol. Meanwhile, the computation layer of the technology, would be innovative in using a tool called K Framework to allow developers to write smart contracts in the programming language of their choice, he said. Security is paramount to cryptocurrency because flaws in code increase the risk of hacks and the loss of coin holder funds, unfortunately witnessed too often. With that in mind, Duncan Coutts, head of engineering at IOHK, explained how the company approaches software development: cryptography research papers are translated into code using the technique of formal specification. This involves a series of mathematical steps that progressively take the cryptography closer to the code that the developers write, a process that allows checks to be made that the specifications are indeed correct. After the presentation crowds formed outside the hall to have their photos taken with the Cardano team. Some people who came along were longstanding supporters of the project, such as Naomi Nisiguchi, from Mie Prefecture. She works as a manager in the construction industry and has had an interest in cryptocurrency for four years. “Around two years ago I heard about Ada and that Charles Hoskinson was involved,” she said. 
“I’ve been following the news on Facebook and I’m very interested to learn how the project will move on.” -- The Cardano Portfolio The Cardano Hub the source for all things Cardano https://www.cardanohub.org/en/home/ Cardano Blockchain Explorer An open source block explorer for the Cardano project https://cardanoexplorer.com Cardano Documentation Full technical documentation of the project https://cardanodocs.com Cardano Roadmap Development path of the Cardano project https://cardanoroadmap.com Why Cardano The philosophy behind the project https://whycardano.com Daedalus Platform Open source platform https://daedaluswallet.io The Cardano Foundation Supervisory and educational body for the Cardano Protocol https://cardanofoundation.org Cardano Foundation YouTube All the latest videos & tutorials https://www.youtube.com/channel/UCbQ9... Cardano Foundation Follow the Foundation https://twitter.com/CardanoStiftung Cardano Slack Join the conversation https://cardano.herokuapp.com Cardano reddit Join the conversation https://www.reddit.com/r/cardano/ IOHK Development partner https://iohk.io IOHK blog Read about the latest technology advancements https://iohk.io/blog/ —
https://wn.com/Iohk_|_Charles_Hoskinson_Keynote
iOS 11 ARKit + Vision Framework = ARPaint
1:01

iOS 11 ARKit + Vision Framework = ARPaint

  • Order:
  • Duration: 1:01
  • Updated: 10 Aug 2017
  • views: 1560
videos
This is how real future of AR combined with Computer Vision may looks like. Amazing project by OSAMA ABDELKARIM ABOULHASSAN with open source code and detailed tutorial: - https://www.toptal.com/swift/ios-arkit-tutorial-drawing-in-air-with-fingers - https://github.com/oabdelkarim/ARPaint
https://wn.com/Ios_11_Arkit_Vision_Framework_Arpaint
Ethereum Co-Founder Charles Hoskinson Video | ICOs and the Future of Investing
31:51

Ethereum Co-Founder Charles Hoskinson Video | ICOs and the Future of Investing

  • Order:
  • Duration: 31:51
  • Updated: 11 Jan 2018
  • views: 2761
videos
Start your 14-day free trial on Real Vision. Learn how you can become a great investor: http://rvtv.io/2FnlFw9 Charles Hoskinson, the entrepreneur and mathematician who co-founded ethereum, takes a close look at initial coin offerings, their potential and the attendant risks. Charles also peels back the curtain on creating ethereum, which has risen above $40 billion in market capitalization.
https://wn.com/Ethereum_Co_Founder_Charles_Hoskinson_Video_|_Icos_And_The_Future_Of_Investing
Ron Evans - Putting Eyes on the IoT: Advanced Computer Vision Using Golang

  • Duration: 55:50
  • Updated: 07 Dec 2017
  • views: 244
Global IoT DevFest II November 7-8, 2017 The Global IoT DevFest provides industry thought leaders, innovators, developers, and enthusiasts worldwide a platform to share knowledge, present visions, conduct deep-dive training, and share real-world use cases and solutions. IoT experts and enthusiasts alike will come together in a virtual forum to share their voice, vision and solutions; teach and learn through sessions covering a wide range of topics; get connected through 1:1 mentoring sessions; and showcase cutting-edge IoT research and innovation.
https://wn.com/Ron_Evans_Putting_Eyes_On_The_Iot_Advanced_Computer_Vision_Using_Golang
OpenCV Face Detection with Raspberry Pi - Robotics with Python p.7

  • Duration: 22:09
  • Updated: 01 Sep 2015
  • views: 235017
Next, we're going to touch on using OpenCV with the Raspberry Pi's camera, giving our robot the gift of sight. There are many steps involved in this process, so there's a lot that is about to be thrown your way. If at any point you're stuck, feel free to ask questions on the video and I will try to help where possible. There are a lot of moving parts here. If all else fails, I have hosted my Raspberry Pi image: https://drive.google.com/file/d/0B11p78NlrG-vZzdJLWYxcU5iMXM/view?usp=sharing OpenCV stands for Open Source Computer Vision, and it is an open source computer vision and machine learning library. To start, you will need to get OpenCV onto your Raspberry Pi. http://mitchtech.net/raspberry-pi-opencv/ Keep in mind, the "make" part of this tutorial will take 9-10 hours on a Raspberry Pi Model B+. The Raspberry Pi 2 will do it in more like 2-4 hours. Either way, it will take a while; I just did it overnight one night. Text-based version and sample code: http://pythonprogramming.net/raspberry-pi-camera-opencv-face-detection-tutorial/ http://pythonprogramming.net https://twitter.com/sentdex
https://wn.com/Opencv_Face_Detection_With_Raspberry_Pi_Robotics_With_Python_P.7
SP1 Real-Time Stereo Vision System

  • Duration: 2:26
  • Updated: 28 Oct 2015
  • views: 17055
The SP1 stereo vision system (and also its successor, SceneScan) is Nerian Vision Technologies' solution for real-time depth sensing. This stand-alone device connects to two USB industrial cameras, which are set up in a common stereo alignment. The SP1 processes the images of both cameras in real time, using a hardware implementation of a state-of-the-art stereo matching algorithm. The computed depth map is transmitted over Gigabit Ethernet to an attached computer or embedded system. Stereo vision is a passive approach to depth perception, which allows its use in situations where active 3D sensors fail. This is usually the case in bright ambient light, such as outdoors in direct sunshine. The SP1 provides the user with dense 3D sensory information even in such difficult conditions. For industries where changing and difficult lighting conditions are common, such as the logistics sector, the SP1 provides a viable solution for acquiring 3D sensory information. Another key advantage of the SP1 is its versatility. Other 3D sensor systems are designed to observe one pre-defined volume. For industrial plants that process products of very different sizes, or for customers who need to cover an unusual measurement range, available 3D sensors might not be sufficient. The SP1 allows the user to control the covered volume through the choice of camera distance and optics. The user also has a choice of image sensor, which can be picked according to the application's requirements. The camera setup can be adjusted at any point if the application requirements change. This makes camera calibration critical. Many existing stereo cameras are shipped pre-calibrated and do not allow re-calibration by the user. The SP1, on the other hand, provides an easy user interface that facilitates re-calibration within minutes. Hence, depth measuring can be resumed shortly after the camera setup has been adjusted. Find out more at: https://nerian.com/products/scenescan-stereo-vision/
https://wn.com/Sp1_Real_Time_Stereo_Vision_System
OpenVX Workshop for Vision and Neural Network Acceleration - Part I

  • Duration: 1:36:20
  • Updated: 11 May 2017
  • views: 1600
Links: https://khr.io/evs2017 The course covers the graph API that enables OpenVX developers to efficiently run computer vision algorithms on heterogeneous computing architectures. A set of example algorithms for feature tracking and neural networks mapped to the graph API will be discussed. Also covered is the relationship between OpenVX and OpenCV, as well as OpenCL. The course includes a hands-on practice session that gets participants started on solving real computer vision problems using OpenVX.
https://wn.com/Openvx_Workshop_For_Vision_And_Neural_Network_Acceleration_Part_I
The KITTI Vision Benchmark Suite

  • Duration: 4:56
  • Updated: 14 Mar 2012
  • views: 24260
This benchmark suite was designed to provide challenging realistic datasets to the computer vision community. Our benchmarks currently evaluate stereo, optical flow, visual odometry, 3D object detection and tracking. If you want to contribute results of your method(s), have a look at our evaluation webserver at: http://www.cvlibs.net/datasets/kitti
https://wn.com/The_Kitti_Vision_Benchmark_Suite
CNN-SLAM: Real-time dense monocular SLAM with learned depth prediction

  • Duration: 2:09
  • Updated: 15 Mar 2017
  • views: 34520
"CNN-SLAM: Real-time dense monocular SLAM with learned depth prediction," K. Tateno, F. Tombari, I. Laina, N. Navab, IEEE Computer Society Conference on Computer Vision and Pattern Recognition (CVPR), 2017. http://campar.in.tum.de/pub/tateno2017cvpr/tateno2017cvpr.pdf Given the recent advances in depth prediction from Convolutional Neural Networks (CNNs), this paper investigates how predicted depth maps from a deep neural network can be deployed for accurate and dense monocular reconstruction. We propose a method where CNN-predicted dense depth maps are naturally fused together with depth measurements obtained from direct monocular SLAM. Our fusion scheme privileges depth prediction in image locations where monocular SLAM approaches tend to fail, e.g. along low-textured regions, and vice-versa. We demonstrate the use of depth prediction for estimating the absolute scale of the reconstruction, hence overcoming one of the major limitations of monocular SLAM. Finally, we propose a framework to efficiently fuse semantic labels, obtained from a single frame, with dense SLAM, yielding semantically coherent scene reconstruction from a single view. Evaluation results on two benchmark datasets show the robustness and accuracy of our approach.
https://wn.com/Cnn_Slam_Real_Time_Dense_Monocular_Slam_With_Learned_Depth_Prediction
Raspberry Pi Robot Arm With Computer Vision + Image Processing Pics

  • Duration: 3:10
  • Updated: 08 Dec 2015
  • views: 74936
The robot arm controller is a Raspberry Pi 2 Model B. The servomotors are Dynamixel AX-12A. There is a Raspberry Pi camera module mounted on the top for image processing. The computer vision algorithms applied here are edge detection, binarization, pixel expansion, labeling and object extraction. In this video I tried to show how the robot sees the world by adding pictures taken directly out of the image processing algorithms (I just added the coloring in the labeling process). I also tried to sync the pictures to the superb music of the great artist “Broke For Free”. Here’s some further info on the thing: I didn’t use OpenCV. The image processing algorithms applied here are all very simple; I wanted to write them on my own. Two important libraries which I used are Python's "picamera" and a library called "ax12". "picamera" provides an easy way to get greyscale pixel data from the Raspberry Pi camera module. "ax12" is used for the communication with the Dynamixel AX-12A servos. I did write some code to make the servomotors move more smoothly (starting and stopping in a smooth sinusoidal manner). And then there was a bit of code to actually get the joints into positions which would allow the electromagnet to pick up the metallic things. In other words, this was about getting the thing to move correctly given some x and y values which were extracted from the image earlier. My blog about the thing: https://electrondust.com/2017/10/28/raspberry-pi-robot-arm-with-simple-computer-vision/ Sourcecode: https://github.com/T-Kuhn/ScrewPicker Music: "Night Owl" by Broke For Free http://www.brokeforfree.com
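The video doesn't publish the algorithms themselves, but the binarization and labeling steps it names can be sketched in plain NumPy roughly like this (the threshold value and 4-connectivity are assumptions, not details from the project):

```python
import numpy as np

def binarize(gray, threshold=128):
    """Threshold a greyscale image into a boolean foreground mask."""
    return gray >= threshold

def label_regions(mask):
    """Simple 4-connected component labeling via iterative flood fill.

    Returns (label image, number of regions found).
    """
    h, w = mask.shape
    labels = np.zeros((h, w), dtype=int)
    current = 0
    for y in range(h):
        for x in range(w):
            if mask[y, x] and labels[y, x] == 0:
                current += 1          # start a new region
                stack = [(y, x)]
                labels[y, x] = current
                while stack:          # flood-fill the region
                    cy, cx = stack.pop()
                    for ny, nx in ((cy - 1, cx), (cy + 1, cx),
                                   (cy, cx - 1), (cy, cx + 1)):
                        if (0 <= ny < h and 0 <= nx < w
                                and mask[ny, nx] and labels[ny, nx] == 0):
                            labels[ny, nx] = current
                            stack.append((ny, nx))
    return labels, current

# Two bright squares on a dark background -> two labeled objects.
img = np.zeros((6, 6), dtype=np.uint8)
img[0:2, 0:2] = 255
img[4:6, 4:6] = 255
labels, count = label_regions(binarize(img))
print(count)  # 2
```

Object extraction then reduces to reading each labeled region's bounding box or centroid, which is where the x and y pick-up coordinates mentioned above would come from.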
https://wn.com/Raspberry_Pi_Robot_Arm_With_Computer_Vision_Image_Processing_Pics
IOHK | Duncan Coutts, Director of Engineering

  • Duration: 28:29
  • Updated: 09 Nov 2017
  • views: 1449
https://iohk.io/team/duncan-coutts/ The technology was conceived in an Osaka restaurant more than two years ago, and from that small beginning Cardano has been built into a leading cryptocurrency. The project has amassed a team of experts in countries around the world, has generated more than 67,000 lines of code, and has a strong and growing community in countries across Asia and beyond. Along the way, Cardano has set new standards for cryptocurrencies with best practices such as peer review and high-assurance methods of software engineering. The official launch was held in the district of Shibuya in Tokyo on Saturday October 14 for an audience of about 500 people, who had each won a ticket through a lottery held on social media. Excited cryptocurrency enthusiasts, Ada holders and business people from across Japan queued to get Cardano t-shirts and souvenir physical Ada coins, before going into the main hall to hear about how Cardano was created and the vision for its future. “The first thing we did when we knew the project was real was to build great partnerships,” Charles Hoskinson, founder and CEO of IOHK, told the audience. “Our chief scientist is based at the University of Edinburgh, it is a wonderful place, where they built the heart of Cardano. We have a lot of wonderful people at the University of Athens, they are rigorous, making sure that the theory works. And we have people at Tokyo Tech who work on multi-party computation and look to the future, and work out how to make Cardano last a long time.” The vision for Cardano, Hoskinson said, was that it would pull together academic research and bright ideas from computer science to produce a cryptocurrency capable of much more than its predecessors. This “third generation” cryptocurrency would be able to scale to a billion users, using a proof-of-stake algorithm, Ouroboros, which avoided the huge energy consumption of proof-of-work cryptocurrencies.
Features that would be added to Cardano to help it scale included sidechains, trusted hardware, and RINA, or recursive internetwork architecture. Sustainability would be part of the design by way of a treasury system to fund development indefinitely, allowing stakeholders to vote on proposed changes to the protocol. Meanwhile, the computation layer of the technology would be innovative in using a tool called K Framework to allow developers to write smart contracts in the programming language of their choice, he said. Security is paramount to cryptocurrency because flaws in code increase the risk of hacks and the loss of coin holder funds, unfortunately witnessed too often. With that in mind, Duncan Coutts, head of engineering at IOHK, explained how the company approaches software development: cryptography research papers are translated into code using the technique of formal specification. This involves a series of mathematical steps that progressively take the cryptography closer to the code that the developers write, a process that allows checks to be made that the specifications are indeed correct. After the presentation, crowds formed outside the hall to have their photos taken with the Cardano team. Some people who came along were longstanding supporters of the project, such as Naomi Nisiguchi, from Mie Prefecture. She works as a manager in the construction industry and has had an interest in cryptocurrency for four years. “Around two years ago I heard about Ada and that Charles Hoskinson was involved,” she said.
“I’ve been following the news on Facebook and I’m very interested to learn how the project will move on.”
https://wn.com/Iohk_|_Duncan_Coutts,_Director_Of_Engineering
Hello World - Machine Learning Recipes #1

  • Duration: 6:53
  • Updated: 30 Mar 2016
  • views: 1307665
Six lines of Python is all it takes to write your first machine learning program! In this episode, we'll briefly introduce what machine learning is and why it's important. Then, we'll follow a recipe for supervised learning (a technique to create a classifier from examples) and code it up. Follow https://twitter.com/random_forests for updates on new episodes! Subscribe to the Google Developers channel: http://goo.gl/mQyv5L - Subscribe to the brand new Firebase channel: https://goo.gl/9giPHG And here's our playlist: https://goo.gl/KewA03
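The six-line program the episode describes follows this shape: a few labeled examples, a scikit-learn decision tree, fit, then predict (the fruit features below echo the episode's apples-and-oranges example; the exact values are illustrative):

```python
from sklearn import tree

# Training data: [weight in grams, texture] where texture 1 = smooth, 0 = bumpy.
features = [[140, 1], [130, 1], [150, 0], [170, 0]]
labels = [0, 0, 1, 1]  # 0 = apple, 1 = orange

clf = tree.DecisionTreeClassifier()  # the classifier: a box of learned rules
clf = clf.fit(features, labels)      # find patterns in the examples
print(clf.predict([[160, 0]]))       # classify a heavy, bumpy fruit -> [1]
```

Swapping in a different classifier (say, `KNeighborsClassifier`) changes only one line, which is the point the series goes on to make about scikit-learn's uniform fit/predict interface.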
https://wn.com/Hello_World_Machine_Learning_Recipes_1
Loitor vision sensing Inertia Camera open-source project

  • Duration: 4:19
  • Updated: 08 Apr 2017
  • views: 1342
http://www.lodetc.com presents the Loitor Inertia Camera project, an Intel RealSense competitor where the price can be less than $72 for the whole module. Loitor is an open-source 3D camera system that can be packaged to run with an Allwinner A83 processor; the motherboard also has a Cypress CY68013 and an STMicroelectronics STM32 on board. The camera can recognize 3D space for indoor and outdoor mapping. Project developers can contact Loitor here: kang791208@aliyun.com skype: kang791208@gmail.com Mobile: +86 13582312223
https://wn.com/Loitor_Vision_Sensing_Inertia_Camera_Open_Source_Project
Bill Gates interview: How the world will change by 2030

  • Duration: 17:51
  • Updated: 22 Jan 2015
  • views: 2498321
The Verge sat down with Bill Gates to talk about his ambitious vision for improving the lives of the poor through technology. It just so happens that The Verge exists to explore that kind of change — which is why Bill Gates will be The Verge’s first ever guest editor in February. Subscribe: http://goo.gl/G5RXGs Read more: http://theverge.com/e/7634538 Check out our full video catalog: http://goo.gl/lfcGfq Visit our playlists: http://goo.gl/94XbKx Like The Verge on Facebook: http://goo.gl/2P1aGc Follow on Twitter: http://goo.gl/XTWX61 Follow on Instagram: http://goo.gl/7ZeLvX Read More: http://www.theverge.com
https://wn.com/Bill_Gates_Interview_How_The_World_Will_Change_By_2030
FarmBot: open source backyard robot for a fully automated garden

  • Duration: 31:44
  • Updated: 25 Sep 2016
  • views: 285893
In the front yard of Rory Aronson’s San Luis Obispo home (which he shares with 9 roommates), a robot is tending his garden, seeding, watering, weeding and testing the soil, while he controls it from his phone. FarmBot is what he calls “humanity's open-source automated precision farming machine”. https://farmbot.io/ As a student at Cal Poly San Luis Obispo he was inspired by a guest lecture in his organic agriculture class, “when a traditional farmer came in talking about some of the tractor technology he’s using on his farm and I looked at that and said, ‘Wait a minute, I can do that better’,” explains Aronson. “The first thing that I thought of when I thought of the idea was, ‘Oh, this probably exists, let me go look it up,’ and I scoured the Internet. I was amazed, actually, that there was no CNC-type farming equipment already in existence, so I said, well, I guess it’s up to me.” During the summer after graduation Aronson wrote a white paper to outline his ideas and within days he had the attention of “software developers, open-source enthusiasts, ag specialists, mechanical engineers, and more”. After several years of iterations and a crowdfunding campaign that has raised over a million dollars, the FarmBot team (Rory and programmers based worldwide) will release the FarmBot Genesis in early 2017. Using an Arduino and Raspberry Pi, FarmBots are “giant 3D printers, but instead of extruding plastic, its tools are seed injectors, watering nozzles, sensors, and more.” If you want to print your own, the specs are all free and open source, but if you’d rather buy an all-inclusive kit, it will cost you $2900, a number Aronson says will come down with time. He sees it as a long-term investment. “Because it’s so based in software, all of the functions, it will get better over time, so even if you bought a kit today the hardware won’t change, but the software will allow it to do more and more things over time”.
“My long-term vision for FarmBot is that it’s a home appliance,” explains Aronson. “Just like everyone has a refrigerator and a washing machine and a dryer, maybe you have a FarmBot too, out in the backyard doing its thing, and it’s like a utility that you use. You turn on the water at your faucet and water comes out; you go out into your backyard and there’s food that’s been grown for you.” Original story: https://faircompanies.com/videos/open-source-bot-plants-maintains-your-garden-when-you-cant/
https://wn.com/Farmbot_Open_Source_Backyard_Robot_For_A_Fully_Automated_Garden