Training RPI4 on a Google Coral TPU

Hello,

Is anyone using the Coral TPU on an RPi 4 with TensorFlow Lite yet?
I guess it is the next logical step but I cannot find any information related to this.

Thanks

I tried for a few hours to convert a trained model to the format required to run on the Coral. I had problems with various combinations of using a Keras HDF5 file or the checkpoint/[index|data] files, and with or without some image rescaling operations I wanted to use. It also wasn’t clear whether I should use some command line tool to do the conversion, or write my own script. As with everything in TensorFlow, there are k ways to do it, and each has 1/k of a full complement of documentation.
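For what it’s worth, the script route boils down to post-training full-integer quantization. Here’s a rough sketch, with a tiny stand-in network expressed as a concrete function (a real flow would start from your Keras HDF5 file via the converter’s Keras entry point) and random data standing in for the representative dataset; it’s written against the 2.x-era converter API, but the 1.15 converter exposes the same `representative_dataset` / `target_spec.supported_ops` knobs:

```python
import numpy as np
import tensorflow as tf

# Tiny stand-in for a real trained network (a real flow would load
# your Keras HDF5 file instead).
weights = tf.constant(np.random.rand(4, 2).astype(np.float32))

@tf.function(input_signature=[tf.TensorSpec([1, 4], tf.float32)])
def model_fn(x):
    return tf.nn.relu(tf.matmul(x, weights))

converter = tf.lite.TFLiteConverter.from_concrete_functions(
    [model_fn.get_concrete_function()])

# Post-training full-integer quantization: the Edge TPU only runs
# int8 ops, so every tensor needs quantization parameters, which the
# converter calibrates from the representative dataset.
converter.optimizations = [tf.lite.Optimize.DEFAULT]

def representative_dataset():
    # Random stand-in data; use real preprocessed samples in practice.
    for _ in range(10):
        yield [np.random.rand(1, 4).astype(np.float32)]

converter.representative_dataset = representative_dataset
converter.target_spec.supported_ops = [tf.lite.OpsSet.TFLITE_BUILTINS_INT8]

tflite_model = converter.convert()
with open("model.tflite", "wb") as f:
    f.write(tflite_model)
```

The resulting `model.tflite` then goes through the `edgetpu_compiler` command line tool. Note this sketch leaves the model’s input/output tensors as float; for a real Coral deployment you would also set the converter’s inference input/output types to an integer type.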

However, this was, I think, back in version 1.12. I’ve had on my back-burner for a while the task of investigating whether this has improved with 1.15.

I still have no intention of moving to 2.0. I’d be very happy to if Google were willing to focus on creating a single, opinionated framework, or be honest about the fact that they now have two completely different frameworks, but that seems unlikely.

Separately, I was doing all of this on a regular x86 Ubuntu machine. So, there, I at least have high confidence that I could get it to work at some point. But I have no idea what the situation is on Arm+Raspbian, since we do rely on Google’s driver software to provide IO to the accelerator. If that works, it would go a long way towards making RPi competitive with the various Nvidia Tegra SBCs for the kind of streaming inference tasks performed for Donkey Car and similar projects.

Thanks for the reply.
I am just barely getting things together, as I was initially planning to do a competition with JetBot-size vehicles, but I guess it would make more sense to compete in an existing race format like the Donkey Car league… and now Amazon is getting involved as well!

My initial tests using the RPI4 and Coral are based on Les Wright’s implementation, running ‘tpu_models/mobilenet_ssd_v2_coco_quant_postprocess_edgetpu.tflite’, and I could achieve up to 40 FPS.
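(In case anyone wants to compare numbers: the FPS figure is just frames divided by wall-clock time. A minimal timing harness looks like the below, with a sleep stub standing in for the real Edge TPU invoke call, since that needs the Coral runtime and hardware.)

```python
import time

def measure_fps(infer, frames=100):
    """Average frames per second over `frames` calls to `infer`."""
    start = time.monotonic()
    for _ in range(frames):
        infer()
    return frames / (time.monotonic() - start)

# Stub standing in for interpreter.invoke() on the Coral;
# swap in the real Edge TPU inference call on the device.
print(round(measure_fps(lambda: time.sleep(0.002))))
```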

These are my competing candidates:
Left: the JetBot powered by the Jetson Nano; right: the RPi4Bot accelerated with the Google Coral.

I can see on the Slack channel that @tawnkramer and others are working on it. I think it might be interesting to get an update on the forum as well.

Tawn 3:21 PM

After that we can run the edgetpu_compiler and hopefully have something that will run on the Coral. The Python Coral API only takes an int8 tensor at inference time. Since we normalize the image before feeding it to training, it’s a float tensor in the range 0–1. We can’t really do the same at run-time, since we aren’t dealing with a float tensor. So we could remove the normalization from the original train loop. That might work, but we would need to add some BatchNorm layers to the model and verify that things still work.
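One side note on that normalization point: a fully quantized TFLite model carries a scale and zero point on its input tensor, and if those come out as scale = 1/255 and zero_point = 0 (hypothetical values chosen here to match a divide-by-255 normalization), then feeding the raw uint8 frame is numerically identical to the 0–1 float input the training pipeline produced. A quick numpy check:

```python
import numpy as np

# Hypothetical quantization parameters on the model's uint8 input
# tensor, chosen to match a "divide by 255" normalization.
scale, zero_point = 1.0 / 255.0, 0

frame = np.random.randint(0, 256, size=(1, 120, 160, 3), dtype=np.uint8)

# What the training pipeline fed the float model:
normalized = frame.astype(np.float32) / 255.0

# What the quantized model effectively "sees" when handed the raw
# uint8 frame (the standard TFLite dequantization formula):
dequantized = scale * (frame.astype(np.float32) - zero_point)

assert np.allclose(normalized, dequantized)
```

So depending on what quantization parameters the converter assigns to the input, the scale can absorb the normalization, rather than the normalization needing an equivalent op at run-time.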