Controlling a UR3 robot with gestures over a network

Hello everyone!

Thankfully, it didn’t take me a year to write a 2nd post! This time it’s about a part of a project I worked on over the last 2–3 months (in parallel to my PhD studies, of course :P). The topic is again teleoperation, this time using only hardware (the last post was about using a virtual environment), such as the UR3 robotic arm below (looks great, doesn’t it?).


The project I contributed to was demonstrated at Mobile World Congress 2017 in Barcelona at one of Ericsson’s booths. Why Ericsson? Because King’s College London (where I study) and Ericsson collaborate on standardizing 5G. I must say it was a pleasure working with everyone who participated.

Needless to say, it was a really tiring week, as the event hit a record 108,000 visitors, but what an experience it was…just epic! I also had the chance to meet and talk with many interesting and amazing people. For more information, there is a CNET article with a bit of a demonstration as well 🙂

Anyway…with the amount of time I had available to learn how to use ROS and make the robot move, I could only create a gesture system that receives position commands from a client. The positions the robot could move to were pre-defined. I wish I had more time to make a direct control application (with speed and range-of-motion limiters, of course).
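The actual script isn’t reproduced here, but the core idea — mapping each incoming gesture command to one of the pre-defined robot poses — can be sketched roughly like this (all names and joint values below are illustrative, not from the actual demo):

```cpp
#include <array>
#include <map>
#include <stdexcept>
#include <string>

// One pre-defined pose: six joint angles (radians), one per UR3 joint.
using JointPose = std::array<double, 6>;

// Hypothetical lookup table from a gesture command name to a stored pose.
const std::map<std::string, JointPose>& poseTable() {
    static const std::map<std::string, JointPose> table = {
        {"home",  { 0.0, -1.57, 0.0, -1.57, 0.0, 0.0}},
        {"left",  { 1.2, -1.57, 0.5, -1.57, 0.0, 0.0}},
        {"right", {-1.2, -1.57, 0.5, -1.57, 0.0, 0.0}},
    };
    return table;
}

// Resolve a received command to a pose; unknown commands are rejected
// instead of moving the robot somewhere unintended.
JointPose resolveCommand(const std::string& command) {
    auto it = poseTable().find(command);
    if (it == poseTable().end())
        throw std::invalid_argument("unknown gesture command: " + command);
    return it->second;
}
```

The nice property of this approach for a crowded demo floor is that the robot can only ever reach poses you have vetted in advance.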

The juice

You can download the Python script here. So…to make the robot move, we used Linux (Ubuntu 14.04, specifically) with ROS. I said “we” because ROS was not only used for making the robot move. Anyhow, to make the app run you need to:

  1. Create a catkin workspace using ROS.
  2. Download the ROS-Industrial universal robot meta-package.
  3. Download ur_modern_driver and put everything inside the workspace’s src folder.
  4. Compile with catkin_make (yes, you will probably need to install many ROS dependencies).
  5. And then open a terminal to launch ROS with:

    $ source path/to/workspace/devel/setup.bash

    $ roslaunch ur_bringup ur3_bringup.launch robot_ip:=<the robot's IP>

  6. Open another terminal to run the application:

    $ source path/to/workspace/devel/setup.bash

    $ rosrun ur_modern_driver <name of the Python script>


Again, as with my previous post…not sure if this is helpful to anyone but it’s good to have it documented somewhere 🙂

A simple demo on the impact of latency in teleoperation

Hello everyone!

It’s been so long since I wrote a blog post. You can’t imagine how much I’ve been waiting for a reason to start writing and…here it is!

Since the last time I wrote something, I finished my MSc degree at NKUA and started my PhD at King’s College London. I am now studying and working on haptics over the upcoming 5G network infrastructure. How it works, how to improve it and how to make it usable for a number of use cases are a few of the questions I’m looking to answer.

The demo

I’m happy to say that I have just uploaded the first demo I ever made since I joined the KCL-Ericsson 5G lab. It’s rather simple but gets the job done. It also never fails to impress people who don’t know what it’s like to teleoperate something under latency.

So, here it is:

It’s actually a modified version of one of the examples provided by the Chai3D C++ framework that I’ve been using lately. The modification was simply adding one buffer on the position channel and another on the feedback channel. As you increase the latency (i.e. the size of the buffers) above 10 ms, the data you send and the data you receive become de-synchronized, making the haptic device unstable…it really starts to “kick” when you touch anything with the grey ball.
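Conceptually, each of those buffers is just a FIFO delay line: a sample goes in at every haptic tick and comes back out a fixed number of ticks later. A minimal sketch of the idea (not the actual Chai3D code; the names are illustrative):

```cpp
#include <cstddef>
#include <deque>

// A simple fixed-delay line: push a new sample each haptic tick,
// and read back the sample from `delayTicks` ticks ago.
template <typename Sample>
class DelayLine {
public:
    explicit DelayLine(std::size_t delayTicks, Sample initial = Sample{})
        : delay_(delayTicks), initial_(initial) {}

    // Called once per haptic tick with the freshest sample;
    // returns the sample that entered the buffer `delay_` ticks ago.
    Sample push(const Sample& s) {
        buffer_.push_back(s);
        if (buffer_.size() <= delay_)
            return initial_;           // buffer still filling up
        Sample delayed = buffer_.front();
        buffer_.pop_front();
        return delayed;
    }

    // Growing the buffer grows the delay, like the +/- keys in the demo.
    void setDelay(std::size_t delayTicks) { delay_ = delayTicks; }

private:
    std::size_t delay_;
    Sample initial_;
    std::deque<Sample> buffer_;
};
```

With two instances — one on the positions you send, one on the forces you receive — and a haptic loop running at roughly 1 kHz, a delay of 10 ticks corresponds to about 10 ms, which is where the instability starts to show.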

The haptic device used is the Sensable Phantom Omni (IEEE 1394 version), which works only under Windows (at least for me). So, in case anyone has made it work under Linux, please send over a how-to 🙂

There is room for improvements, further modifications and optimizations. One idea is to implement at least one stability control algorithm and compare it to operation without one.

Anyway, here is a pic from the application.

Application screenshot

You can slide the cylinder along the string using the grey ball, which you control with the haptic device. You can change the latency (bottom right) with the + and – keys.

Change a Font Awesome icon on hover (using content) + Sopler news!

Hi everyone! Over the past few days, we made some major updates to Sopler that we had started designing a long time ago.

It is now possible to set a due date or edit your items using a brand-new options menu. Also, when you enter a YouTube link, an (auto-scalable) player will appear on the list! 🙂

Nonetheless, this post is about changing one Font Awesome icon to another when hovering over the first one.

Firstly, I came across this post and a few (unrelated but helpful) answers on Stack Overflow that used the content property. I thought this might work pretty well, and it did.

For example,

<div class="divclass">
  <i class="fa fa-circle-o"></i>
</div>

using this CSS (\f111 is the Unicode code point of the solid fa-circle icon in Font Awesome 4, so hovering swaps the outlined circle for the solid one):

.divclass:hover .fa-circle-o:before {
  content: "\f111";
}

OK, the div element will be a full-width rectangle (use your Developer Toolbar to check what’s going on), but you can modify it later. Anyway, the result is:

It might be trivial but it’s also a lot easier than other implementations I’ve seen so far.

An implementation of a person (and object) re-identification method

Hi! I would like to present you my latest upload on GitHub:

It’s a C++ implementation of a research publication on human re-identification [1] (…with a very minimal OO design, though).

This programme was created for academic purposes (!) and it can most probably be used to re-identify other (similarly distinctive) objects as well, although this has not been tested. It divides the image into 3×4 blocks and uses very simple features (HSV values, first- and second-order derivatives).

As you will see on the GitHub page, the programme uses the open-source OpenCV and VLFeat libraries. Also, it was developed on a 64-bit Fedora 19 machine using the Eclipse IDE.

I am aware that, without proper documentation, the learning curve for using this programme might be steep, but I hope I will find time to prepare something. All configuration happens inside the config.xml file. Using this file, after the training procedure (function zero), the programme creates a gmm_parameters.xml file, which must be used afterwards by all other programme functions (one, two and three) to produce the fisher.xml files that contain each image’s Fisher vectors and the .csv files that contain the Euclidean distances. The results are very similar to those of the publication using the same evaluation methods.

Nonetheless, improvements can be made. For example, during the training process there is no random selection of the image features; adding it would improve the performance of the programme. Various other improvements are also possible.

[1] B. Ma, Y. Su, and F. Jurie, “Local descriptors encoded by fisher vectors for person re-identification” in ECCV Workshops (1) (A. Fusiello, V. Murino, and R. Cucchiara, eds.), vol. 7583 of Lecture Notes in Computer Science, pp. 413–422, Springer, 2012.

A Creative Commons music video made out of other CC videos

Hello! Let’s go straight to the point. Here is the video:

…and here are the videos that were used, all under the Creative Commons Attribution licence. They are downloadable via Vimeo, of course.

Videos available from NASA and the ALMA observatory were also used.

The video (not the audio) is under the Creative Commons BY-NC-SA licence, which I think is quite reasonable since every scene used from the source videos (ok, almost every scene) has lyrics/graphics embedded in it.

I hope you like it! I didn’t have a lot of time to make this video, but I like the result. Unfortunately, the tools I used are not open source, as the learning curve for the open-source alternatives is quite steep. I will definitely try them in the future. Actually, I really haven’t come across any alternative to Adobe After Effects. You might say Blender…but is it really an alternative? Any thoughts?

PS. More news soon for the Sopler project (a web application for making to-do lists) and other things I’ve been working on lately (like MQTT-SN).