Raspberry Pi GPIO states at boot time

When external hardware is connected to the Raspberry Pi, it can be important to know the initial state of the GPIO pins at boot time. For example, if we have a motor connected to a GPIO pin, it could be a problem if the motor starts running as the system boots up. Ideally, we would want all the pins defined as inputs at start up, but that doesn’t seem to be the case with some of the Pi GPIO pins.

Fortunately, there’s now a solution in the form of the device tree pin configuration. This involves creating a pin configuration source file, and compiling it to a binary “blob” which is loaded by the boot firmware at start up. This source file can configure pins as inputs, outputs or in special function modes (the UART serial port, for example). It can also configure whether a pin has pull-up or pull-down enabled.
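As a sketch of what this looks like for GPIO15, here is a fragment in the dt-blob format documented by the Raspberry Pi Foundation (start from the default dt-blob.dts for your board rather than this fragment alone, and note the pins_* section name depends on your board revision):

```dts
/* Fragment of dt-blob.dts -- the enclosing pins_* section must match
   your board revision (pins_rev1, pins_rev2, pins_bplus1, etc.) */
videocore {
    pins_rev2 {
        pin_config {
            /* GPIO15 (RXD): input with pull-down at boot */
            pin@p15 { function = "input"; termination = "pull_down"; };
        };
    };
};
```

The source is then compiled with the device tree compiler (dtc -I dts -O dtb -o /boot/dt-blob.bin dt-blob.dts), and the resulting blob is picked up by the firmware at the next boot.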

This proved very useful for my Espiresso project, when I needed a pin to enable the pump at start up. Having run out of general GPIO pins, I was forced to use GPIO15 (RXD) which is configured as a pull-up UART pin by default. This would have resulted in the pump running as the system booted. However, by editing the device tree configuration, it was possible to set this pin as an input with pull-down mode, which prevents the external hardware from being switched on during boot. The GPIO pin can then be reconfigured as an output by the application, after the Pi has booted.

As an update, there have been a couple of important developments with device tree since I wrote this, which I’ll summarise briefly here:

  • You can now create device tree overlays to configure specific GPIO pins at boot. However, with NOOBS these do not take effect immediately, in which case the dt-blob.bin will still be needed.
  • With NOOBS, the dt-blob.bin file needs to be placed on the recovery partition. Even then, I found the only way to ensure the pins stay in the desired state through the entire boot process is to have the dt-blob.bin on both recovery and /boot partitions.
  • In conclusion: avoid NOOBS for this application, and go with a vanilla Raspbian install.
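For completeness: recent firmware also offers a much simpler route via a gpio directive in config.txt. I haven’t tested this on the setup described above, so treat it as an assumption to verify against your firmware version:

```
# in /boot/config.txt: set GPIO15 as an input with pull-down at boot
gpio=15=ip,pd
```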

Persuading NVIDIA 3D Vision Pro to Sync with an Unsupported Projector

Recently I did some work to persuade the NVIDIA 3D Vision Pro glasses to work with six Optoma EW610ST projectors, and made some interesting discoveries in the process. This projector is not an officially supported model on the NVIDIA compatibility list.

The projectors in our setup are connected in Mosaic mode, and we were using active stereo at 1280×720 and 120Hz on each projector, with the 3-pin VESA stereo sync cable from the graphics card connected to the NVIDIA Pyramid. In the default setup, this results in terrible ghosting (cross-talk) when used with the NVIDIA glasses.

The first problem is that these projectors have only one 3D mode: DLP-Link. This is unfortunate, as DLP-Link uses a white flash to synchronise the glasses, whereas the NVIDIA 3D Vision Pro glasses use RF. Real DLP-Link glasses blank themselves during the white flash, but the NVIDIA shutters remain open, so the flash results in poor contrast (effectively, a slight increase in the brightness of the entire scene). This was a problem in our case, as we were edge-blending the projectors. Black level blending with an offset would be possible, but would reduce the contrast of the whole image. The solution to this first problem was simply to disable 3D mode on the projector! It still seems to display 120Hz frame sequential stereo with 3D disabled.

The next problem is that there is really bad ghosting, because the timing of the glasses is wrong. To find out what was going on, I decided to measure the light output of the projector against the stereo sync signal.

I didn’t have a light sensor to hand, but fortunately LEDs can be used as a (rather insensitive) light sensor. In this case, I used a clear infra-red LED situated about 1cm from the lens, connected directly across one input of my ‘scope. To improve the response speed, I put a 1K resistor in parallel with the LED to help discharge it quickly.

LED as Light Sensor

I connected the stereo sync signal (output from the graphics card) to the second input of the scope, and set the ‘scope to trigger from the sync input.

Then I wrote some OpenGL code to draw pure white on the left stereo channel, and pure black on the right channel, so the projector was flickering full-screen white/black at 120Hz. The results are shown below. The red trace is the stereo sync signal, and the yellow trace is projector light output.
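The test pattern can be sketched roughly as follows. This is only a fragment, assuming a quad-buffered stereo OpenGL context has already been created (window setup and buffer swapping omitted); the original code isn’t shown here:

```c
/* Inside the render loop, with a quad-buffered stereo context: */
glDrawBuffer(GL_BACK_LEFT);             /* left eye: pure white  */
glClearColor(1.0f, 1.0f, 1.0f, 1.0f);
glClear(GL_COLOR_BUFFER_BIT);

glDrawBuffer(GL_BACK_RIGHT);            /* right eye: pure black */
glClearColor(0.0f, 0.0f, 0.0f, 1.0f);
glClear(GL_COLOR_BUFFER_BIT);

/* after swapping, 120Hz frame-sequential stereo makes the projector
   alternate full-screen white/black in step with the stereo sync */
```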

Optoma EW610ST Stereo Phase

As can be seen from the graph above, the two are out of phase, with some of the image visible outside the sync period. I measured the delay at about 316us, which is about 3.8% of the 8.33ms frame time at 120Hz.

The strange shape of the yellow waveform is due in part to the way the DLP projector works, with a spinning colour wheel which has separate segments for red, green, blue and white (some models use different colour combinations). Also, amongst other effects, the different colour components have different luminance levels, and we are only looking at a subset of the mirror array with our sensor.

Since the stereo sync is out of phase with the projector, the logical next step is to delay the sync signal.

Ideally this sort of stereo phase adjustment would be possible from the NVIDIA drivers (at least in the Pro version), but it doesn’t seem to be: I couldn’t find such a setting, and there don’t seem to be any functions in NVAPI for altering the timings either.

There are some specific parameters for each projector manufacturer/model stored in the registry (and driver INF file), which look like possible timings for the glasses:

HKLM\System\CurrentControlSet\Services\NvStUSB\Parameters\SAM_8524

However, the format is proprietary and undocumented. I made some progress in decoding a few of these, but it occurred to me that these parameters might not be used at all when the pyramid is fed with external sync, so it seemed easier to take the direct approach of building some hardware to delay the sync.

I decided to rectify this phase error by delaying the sync signal before feeding it to the NVIDIA pyramid. To achieve this, I used an STM8S Discovery board, which is very low-cost, easily programmed via USB, and capable of high timing precision.

The hardware is very simple: the sync output from the PC is connected via a 1K resistor to a GPIO input on the STM8S (the 1K resistor is just a precaution against toasting the very expensive graphics card!) Another GPIO output is used to feed the sync input of the NVIDIA pyramid. The only other connection is the common ground, and the board is powered over USB.

STM8S Discovery Board

The firmware uses interrupts to detect the high/low transition of the input, then schedules an onboard timer to generate another interrupt after a specified delay period has elapsed. It then drives the output high or low as required. This method will support any vertical refresh rate, and allows very high precision adjustment.

Having set this up to generate a 316us delay, the sync signal is now in phase with the projector:

Optoma EW610ST with delayed sync

However, the projector illuminates for the full frame period, and there is no dark interval at all for the glasses. This is because the projector was set to the “Presentation” display mode, which is designed to maximise image brightness. In our case, this results in ghosting, because the LCD shutters take a little time to ‘open’ and ‘close’, during which time we need the display to be dark.

The solution to this last problem is to alter the projector settings to select a mode which gives a reasonable dark interval for the glasses. On the EW610ST, this can be achieved by setting the Display Mode to sRGB. The final result with the 316us delay and sRGB mode selected is shown below:

Optoma EW610ST acceptable stereo mode

Finally, I tried this with the NVIDIA glasses, and the ghosting had been eliminated. In a realistic scene, ghosting is imperceptible, even in areas of very high contrast.