A simple demo on the impact of latency in teleoperation

Hello everyone!

It’s been so long since I wrote a blog post. You can’t imagine how much I’ve been waiting for a reason to start writing and…here it is!

Since the last time I wrote something, I finished my MSc degree at NKUA and started my PhD at King’s College London. I am now studying and working on haptics over the upcoming 5G network infrastructure. How haptic systems work over such a network, how to improve them and how to make them more usable for a number of use cases are a few of the questions I’m looking to answer.

The demo

I’m happy to say that I have just uploaded the first demo I have made since I joined the KCL-Ericsson 5G lab. It’s rather simple but gets the job done. It also doesn’t fail to impress people who don’t know what it’s like to teleoperate something under latency.

So, here it is: https://github.com/constanton/DelayedTeleoperationDemo

It’s actually a modified version of one of the examples provided by the Chai3D C++ framework that I’ve been using lately. The modification was simply adding one buffer on the position channel and another on the force-feedback channel. As you increase the latency (i.e. the size of the buffers) above 10 ms, the data you send and the data you receive become de-synchronised, making the haptic device unstable…it really starts to “kick” when you touch anything with the grey ball. A minimal sketch of the idea is shown below.
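
Here is a minimal sketch of what I mean by the buffers. This is not the actual demo code: the Vec3 type, the class name and the channel variables are placeholders, and it assumes a haptic loop running at roughly 1 kHz, so a queue of N samples adds roughly N ms of delay.

// Minimal sketch of a per-channel delay buffer (illustration only, not the demo's code).
#include <deque>
#include <cstddef>

struct Vec3 { double x{0}, y{0}, z{0}; };    // placeholder vector type

class DelayBuffer {
public:
  explicit DelayBuffer(std::size_t delaySamples) : delay(delaySamples) {}

  // Push the newest sample and pop the one produced `delay` ticks ago.
  Vec3 pushPop(const Vec3& sample) {
    queue.push_back(sample);
    if (queue.size() <= delay)
      return Vec3{};                         // not enough history yet: output zeros
    Vec3 delayed = queue.front();
    queue.pop_front();
    return delayed;
  }

  void setDelay(std::size_t delaySamples) { delay = delaySamples; }

private:
  std::deque<Vec3> queue;
  std::size_t delay;
};

// Inside the ~1 kHz haptic loop:
//   Vec3 delayedPos   = positionChannel.pushPop(devicePosition);  // master -> slave
//   Vec3 delayedForce = feedbackChannel.pushPop(contactForce);    // slave -> master
// The delayed position drives the virtual tool and the delayed force goes back to
// the device; above roughly 10 ms of delay the loop starts to become unstable.

In the demo, the + and - keys effectively grow or shrink these buffers.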

The haptic device used is the Sensable Phantom Omni (IEEE 1394 version), which works only under Windows (at least for me). So, in case anyone has made it work under Linux, please send over a how-to 🙂

There is room for improvement, further modifications and optimizations. One idea is to implement at least one stability-control algorithm and compare the behaviour with and without it.

Anyway, here is a pic from the application.

Application screenshot

You can slide the cylinder along the string using the grey ball, which you control with the haptic device. You can change the latency (shown at the bottom right) with the + and - keys.

Change a Font Awesome icon on hover (using content) + Sopler news!

Hi everyone! Over the past few days, we made some major updates to Sopler that we had started designing a long time ago.

It is now possible to set a due date or edit your items using a brand new options menu. Also, when you enter a YouTube link, an (auto-scalable) player will appear on the list! 🙂

That said, this post is about changing a Font Awesome icon to another Font Awesome icon when the first one is hovered over.

First, I came across this post and a few (unrelated but helpful) answers on Stack Overflow that used the content property. I thought the same approach might work here, and it did.

For example,

<div class="divclass">
  <i class="fa fa-circle-o"></i>
</div>

using this CSS:

.divclass{
  font-size:5em;
  color:grey;
  cursor:pointer;
}

.divclass:hover .fa-circle-o:before{
  content:"\f05d"; /* Unicode code point of the fa-check-circle-o glyph */
  color:green;
  opacity:0.4;
}

OK, the div element will be a full-width rectangle (use your browser’s developer tools to check what’s going on), but you can modify that later. Anyway, the result is: http://jsbin.com/noqiwi/

It might be trivial but it’s also a lot easier than other implementations I’ve seen so far.

An implementation of a person (and object) re-identification method

Hi! I would like to present to you my latest upload on GitHub:

https://github.com/constanton/bLDFV

It’s a C++ implementation of a research publication on human re-identification [1] (…with a very minimal OO design, though).

This programme was created for academic purposes (!) and it can most probably be used to re-identify other (similarly distinctive) objects as well, although this has not been tested. It divides the image into a 3×4 grid of blocks and uses very simple per-pixel features (HSV values and first- and second-order derivatives). A rough sketch of the feature-extraction step is shown below.
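
To give an idea of the feature-extraction step, here is a rough sketch using OpenCV. It is my own simplified illustration, not the repository’s code, and the exact descriptor layout may differ.

// Rough sketch of block-wise per-pixel features (illustration only).
#include <opencv2/opencv.hpp>
#include <vector>

std::vector<cv::Mat> extractBlockFeatures(const cv::Mat& bgr)
{
  cv::Mat hsv, gray, dx, dy, dxx, dyy;
  cv::cvtColor(bgr, hsv, cv::COLOR_BGR2HSV);
  cv::cvtColor(bgr, gray, cv::COLOR_BGR2GRAY);

  // First- and second-order derivatives of the intensity channel.
  cv::Sobel(gray, dx,  CV_32F, 1, 0);
  cv::Sobel(gray, dy,  CV_32F, 0, 1);
  cv::Sobel(gray, dxx, CV_32F, 2, 0);
  cv::Sobel(gray, dyy, CV_32F, 0, 2);

  const int rows = 4, cols = 3;              // 3x4 grid of blocks
  const int bh = bgr.rows / rows, bw = bgr.cols / cols;
  std::vector<cv::Mat> blocks;

  for (int r = 0; r < rows; ++r)
    for (int c = 0; c < cols; ++c) {
      cv::Rect roi(c * bw, r * bh, bw, bh);
      cv::Mat feats(roi.area(), 7, CV_32F);  // one 7-D row per pixel
      int i = 0;
      for (int y = roi.y; y < roi.y + roi.height; ++y)
        for (int x = roi.x; x < roi.x + roi.width; ++x, ++i) {
          cv::Vec3b p = hsv.at<cv::Vec3b>(y, x);
          float* row = feats.ptr<float>(i);
          row[0] = p[0]; row[1] = p[1]; row[2] = p[2];           // H, S, V
          row[3] = dx.at<float>(y, x);  row[4] = dy.at<float>(y, x);
          row[5] = dxx.at<float>(y, x); row[6] = dyy.at<float>(y, x);
        }
      blocks.push_back(feats);               // later encoded as a Fisher vector
    }
  return blocks;
}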

As you will see on the GitHub page, the programme uses the open source OpenCV and VLFeat libraries. Also, it was developed on a Fedora 19 64bit machine using the Eclipse IDE.

I am aware that, without proper documentation, the learning curve for using this programme might be steep, but I hope I will find time to prepare something. All configuration happens inside the config.xml file. Using this file, the training procedure (function zero) creates a gmm_parameters.xml file, which must then be used by all other programme functions (one, two and three) to produce the fisher.xml files that contain each image’s Fisher vectors and the .csv files that contain the Euclidean distances. The results are very similar to those of the publication, using the same evaluation methods.
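
For reference, the Fisher-vector encoding itself is done with VLFeat. Here is a minimal sketch of that step, assuming VLFeat’s C API; it is a simplified illustration, not the programme’s actual code, which also persists the GMM parameters to gmm_parameters.xml.

// Sketch: fit a GMM on pooled training features, then Fisher-encode one block.
#include <vl/gmm.h>
#include <vl/fisher.h>
#include <vector>

std::vector<float> fisherEncode(const float* trainData, vl_size numTrain,
                                const float* blockData, vl_size numBlockPixels,
                                vl_size dimension, vl_size numClusters)
{
  // Training step (function zero): fit the GMM.
  VlGMM* gmm = vl_gmm_new(VL_TYPE_FLOAT, dimension, numClusters);
  vl_gmm_cluster(gmm, trainData, numTrain);

  // Encoding step (functions one, two and three): one Fisher vector per block,
  // of length 2 * dimension * numClusters.
  std::vector<float> enc(2 * dimension * numClusters);
  vl_fisher_encode(enc.data(), VL_TYPE_FLOAT,
                   vl_gmm_get_means(gmm), dimension, numClusters,
                   vl_gmm_get_covariances(gmm),
                   vl_gmm_get_priors(gmm),
                   blockData, numBlockPixels,
                   VL_FISHER_FLAG_IMPROVED);

  vl_gmm_delete(gmm);
  return enc;
}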

Nonetheless, improvements can be made. For example, during the training process there is currently no random sub-sampling of the image features; adding it would improve the performance of the programme. Various other improvements are also possible.

[1] B. Ma, Y. Su, and F. Jurie, “Local descriptors encoded by Fisher vectors for person re-identification,” in ECCV Workshops (1) (A. Fusiello, V. Murino, and R. Cucchiara, eds.), vol. 7583 of Lecture Notes in Computer Science, pp. 413–422, Springer, 2012.

A Creative Commons music video made out of other CC videos

Hello! Let’s go straight to the point. Here is the video:

…and here are the videos that were used, all under the Creative Commons Attribution licence: http://wonkydollandtheecho.com/thanks.html. They are downloadable via Vimeo, of course.

Videos available from NASA and the ALMA observatory were also used.

The video (not audio) is under the Creative Commons BY-NC-SA licence, which I think is quite reasonable since every scene used from the source videos (ok, almost every scene) has lyrics or graphics embedded in it.

I hope you like it! I didn’t have a lot of time to make this video but I like the result. Unfortunately, the tools I used are not open source, because the learning curve for the open-source alternatives is quite steep. I will definitely try them in the future. Actually, I really haven’t come across any true alternative to Adobe After Effects. You might say Blender…but is it really an alternative? Any thoughts?

PS. More news soon for the Sopler project (a web application for making to-do lists) and other things I’ve been working on lately (like MQTT-SN).

2013 in review powered by WordPress

The WordPress.com stats helper monkeys prepared a cute annual report for this blog for 2013 🙂 For 2014, I’ve bought tickets for FOSDEM to represent Sopler and share the cool things that can be done with it.

Right now, I’m taking a look at Mozilla’s version of the l10n.js library in order to localise Sopler (at least on FirefoxOS).

Well, that’s it! Have a great 2014! By the way, this year’s most-viewed post, for the third year in a row, is again “My Fedora 15 tweaks for an SSD” (ok, its view count accumulates over all the previous years, but it still gets new views nevertheless). I guess both Fedora and SSDs have become even more popular!

PS. A weird thing to say but, if anybody works for a new wave/post-punk/indie rock record label, comment or PM me! I got good news for you from Wonky Doll and the Echo.

Click here to see the complete report.