CAJOE radiation detector & ESP8266: better connection via the headphone port

The cheap CAJOE Geiger tube radiation detector can be hooked up to a WiFi-capable ESP8266 to feed Grafana and other consumers of the count data. I found, however, that the CAJOE's headphone port works better for this than the VIN output, which all the projects I saw use for the connection.

These projects recommend connecting the 5 V signal output labelled “VIN” on the CAJOE to a 3.3 V digital input of the ESP8266. Although 5 V can potentially damage the ESP8266, they correctly observe that this does not happen in practice, possibly because the voltage drops to about 3 V once the two are connected.

Wired up this way, however, I could detect no counts on the microcontroller. The CAJOE kept clicking and its indicator LED kept blinking, but the ESP8266 saw no changes on its input pin. An oscilloscope immediately showed why: connected like this, there are no impulses on VIN, at least on my device. The voltage stays steady at about 3 V and does not drop briefly to zero with each click of the detector.

This clarified the importance of a board modification that some projects obliquely recommend as necessary for “more reliable” detection: changing the 470 kΩ resistor R19 on the board to 1 kΩ or similar. Bypassing the resistor did lead to detectable signals, but I found the work to be unnecessary.

With the oscilloscope I found that the headphone jack of the CAJOE already produces a perfectly usable 3.6 V pulse on both channels of a stereo plug. Connecting the ESP8266 there, I could count the impulses directly, without any further modification of the CAJOE.

Fast as an arrow

An arrow is really fast, about 100 m/s when it leaves the bow, which is about 360 km/h. A microprocessor runs at perhaps 3 GHz, so how far does an arrow move during one CPU cycle? 3 GHz means 3 thousand million cycles per second, so in one cycle the arrow moves about 33 nanometres. That is the order of magnitude of structure elements on microchips, and roughly the length of a hundred silicon atoms laid end-to-end.
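Written out as one line of arithmetic:

$$\frac{100\ \mathrm{m/s}}{3 \times 10^{9}\ \mathrm{cycles/s}} \approx 3.3 \times 10^{-8}\ \mathrm{m} = 33\ \mathrm{nm\ per\ cycle}$$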

Light, the fastest thing in the universe, itself moves only 10 cm in one CPU cycle, for most people about the width of their hand. Since electric signals travel at most at the speed of light, this is the total distance an impulse can cover within the CPU to accomplish whatever needs to be done in one cycle. If your memory module sits 10 cm away from the CPU on the mainboard, just sending a signal out and back takes at least two cycles.

This explains one of the speed limits of computers in the days when they were the size of cupboards, built from discrete components: if the cycles were too short, it became difficult to keep distant parts of the computer in sync, because signals back and forth took longer than a cycle. It is also a good reason why Apple's M-series processors have the memory right next to the processing units inside the package. That is bad for modularity and for adding memory later, but the signal paths are so much shorter. As processing speed reaches the limits of physics, such tight packages of processing and memory look set to become the common form of future processors, as they already are in the processing units of mobile phones and tablets.

AI-Box adversarial training

An AI with unknown motivations of its own and abilities vastly superior to ours would be a mortal danger to the world: it could attack us in ways we cannot even comprehend. Could we contain such a potentially dangerous superhuman artificial intelligence in a locked computer (“a box”) once it is created, or could it convince one of us in a dialogue to let it loose on the world? Could the AI learn to manipulate humans and escape containment just by asking its keepers in the right way?

Continue reading

Turning climate change to stone

Excess carbon dioxide in the atmosphere is changing Earth's climate into one far less hospitable to us. However, Earth's atmosphere is but a thin sliver of gas around a huge ball of rock. How large is our carbon dioxide problem when measured on geological scales?

Earth’s thin layer of atmosphere with the rising sun shining through, as seen from the International Space Station. (NASA)

Chalk is calcium carbonate, CaCO3, a compound of calcium and carbon dioxide. For much of Earth's history, marine creatures, from microscopic free-floating algae to oysters to coral reefs, used chalk to build their shells and houses. When these creatures died, their shells sank to the bottom of the sea or were built upon by new generations, and all the CO2 in them was safely stashed away from the atmosphere.

Over aeons, thick layers of soft chalk formed around the globe. Those of us who live on land that was ocean at some point in Earth's history may be walking on layers of this soft white matter tens or hundreds of metres deep. We can admire these layers where they are exposed, as in the white cliffs of Dover or in the mountains of Lebanon.

The white chalk cliffs of Dover. (Immanuel Giel)

Even today, the ocean floor is covered with more chalk every day. Suppose we could somehow encourage the creatures of the oceans to take up all the excess carbon dioxide from the atmosphere and store it away as chalk: how thick a layer would that form on the ocean floor?

Since 1850, humanity has emitted about 4.4 × 10^17 g of carbon as CO2. Chalk, CaCO3, is a sink of carbon dioxide if it is produced from quicklime, CaO, according to the following reaction:

CaO + CO2 → CaCO3

Humanity has emitted enough carbon dioxide to form 3.7 × 10^18 g of chalk this way (the molar mass of CaCO3 is 100 g/mol):

$$m_{\mathrm{CaCO_3}} = \frac{m_{\mathrm{C}}}{M_{\mathrm{C}}} \left(M_{\mathrm{Ca}} + M_{\mathrm{C}} + 3 M_{\mathrm{O}}\right) = \frac{4.4 \times 10^{17}\ \mathrm{g}}{12\ \mathrm{g/mol}} \times (40 + 12 + 3 \times 16)\ \mathrm{g/mol} = 3.7 \times 10^{18}\ \mathrm{g}$$

The volume of this chalk would be about 2000 cubic kilometres. This follows from the mean density of chalk, which is about 1.79 × 10^6 g/m³:

$$\frac{m_{\mathrm{CaCO_3}}}{\varrho_{\mathrm{CaCO_3}}} = \frac{3.7 \times 10^{18}\ \mathrm{g}}{1.79 \times 10^{6}\ \mathrm{g/m^3}} = 2.05 \times 10^{12}\ \mathrm{m^3} = 2.05 \times 10^{3}\ \mathrm{km^3}$$

That seems a lot, but how thick would this layer be, spread out evenly on the sea-bed, which occupies 3.61 × 10^8 km²? It would be 2.05 × 10^3 km³ / 3.61 × 10^8 km² = 5.7 × 10^-6 km. In more familiar units, it would be a layer of 5.7 mm of chalk.
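The same arithmetic as a quick sanity check in R:

m_C     <- 4.4e17                         # g of carbon emitted since 1850
m_chalk <- m_C / 12 * (40 + 12 + 3 * 16)  # g of CaCO3; molar masses in g/mol
volume  <- m_chalk / 1.79e6               # m^3, from the mean density of chalk
volume / 3.61e14                          # thickness in m over the sea-bed, ~0.0057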

In consequence, what may be the largest threat to humanity amounts, for geology, to a humblingly slim layer of 5.7 mm of chalk over the ocean floor. If only we knew how to entice sea animals and plants to build shells from our atmospheric emissions fast enough.

AI-Driven UI

User interfaces connect users to software. A good UI gives access to all functions, pleases the eye, and radiates power and trustworthiness. It does its job unobtrusively, and sometimes it even makes its users happy. Powerful software amounts to nothing if it cannot speak with its users. How can we make the best possible UI, and how can artificial intelligence help computers talk to humans?

Being made for humans, a UI must work with human faculties and limitations. Keeping those in mind, we can go a long way towards a good UI without using any AI. But we will see that only AI can ultimately resolve the fundamental dilemma: how to make software that is both powerful and easy to use.

Continue reading

Being naughty with R

If you sneak up to the unguarded R session of a friend and enter this, some might soon consider an exorcism:

`(` <- function(x) if(is.numeric(x)) runif(1)*x else x  # every parenthesised numeric value is now scaled by a random factor

In R the parenthesis, as in (1+2), is itself a function, just as the + in the expression is. The same is true of {}, as used in function definitions, if-clauses and for-loops. Since R is so flexible, you may well reassign these functions.
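The same trick works for braces; a sketch (note that the replacement's body must itself avoid braces, or every call would recurse):

`{` <- function(...) ..1   # a block now returns its first expression and,
                           # thanks to lazy evaluation, never runs the rest
f <- function(x) { x + 1; x + 2 }
f(1)                       # returns 2 instead of 3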

This is a great way to make enemies, or at least to set a delightful intellectual challenge. Bonus points for putting the definition into an .Rprofile, so it is not lost too easily when R is restarted.

The formation of a convolutional layer: feature detectors in computer vision

Convolutional layers of artificial neural networks are analogous to the feature detectors of animal vision: both search for pre-defined patterns in a visual field. Convolutional layers form during network training, while how animal feature detectors form is still a matter of debate. I observed and manipulated developing convolutional layers during training in a simple convolutional network: the MNIST hand-written digit recognition example of Google's TensorFlow.

Convolutional layers, usually many different ones in parallel, each use a kernel, which they move over the image like a stencil. For each position, the degree of overlap is noted, and the end result is a map of where the pattern occurs. Classifier layers of the network then combine these maps to decide what the image represents.
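In one dimension, the sliding operation takes only a few lines. A minimal sketch in R; the kernel and signal here are toy values made up for illustration:

# Slide a kernel over a signal, recording the sum of products at each offset
slide <- function(signal, kernel) {
  n <- length(signal) - length(kernel) + 1
  vapply(seq_len(n), function(i)
    sum(signal[i:(i + length(kernel) - 1)] * kernel), numeric(1))
}

edge   <- c(-1, 1)                    # a step-edge template
signal <- c(0, 0, 0, 1, 1, 1, 0, 0)  # a one-dimensional "image" with two edges
slide(signal, edge)                   # 0 0 1 0 0 -1 0: peaks mark the edges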

The reason why convolutional layers can work as feature detectors is that discrete convolution, covariance and correlation are mathematically very similar. In one dimension:

Multiply two images pixel-by-pixel, then add up all products to convolve them: $$(f * g)[n] = \sum_{m=-\infty}^\infty f_m\, g_{n-m}$$

Subtract their means E(f) first to instead get their covariance: $$\mathrm{cov} (f,g)=\frac{1}{n}\sum_{m=1}^n (f_m-E(f))(g_m-E(g))$$

Then divide by the variances σ(f) to get their Pearson correlation: $$\mathrm{corr}(f,g)={\mathrm{cov}(f,g) \over \sigma_f \sigma_g} $$
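The relationship is easy to check numerically; a small sketch in R with arbitrary values (note that R's cov() and cor() divide by n − 1 rather than n):

f <- c(1, 3, 2, 5)
g <- c(2, 1, 4, 3)
sum((f - mean(f)) * (g - mean(g))) / length(f)  # covariance as in the formula above
cov(f, g)                    # R's estimate divides by n - 1, hence the difference
cov(f, g) / (sd(f) * sd(g))  # Pearson correlation; the same as cor(f, g)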

Continue reading

Explore the state of the UK, October 2017

An earlier post explored the winning and losing parts of London, as measured by the success of different kinds of cheap and expensive food-selling enterprises.

Assayed the same way (with the same shortcomings, too!), how did the rest of the UK do? A first answer is: outside London, not many places have done very well, but the Sheffield area is a clear winner. Less densely populated areas are the ones losing most.

Everyone has different questions, though: are cheap or expensive venues becoming more successful where I live or want to live? What kinds of shops are opening in the South-West? Which parts of the country is Pizza Chain X focusing on? In the linked interactive map I created, you can look for yourself. The map answers questions both about the whole of the UK and about favourite counties, cities or boroughs (just zoom in!).

As a practical note, a far-out region sometimes appears surprisingly full of activity. This is worth double-checking: it may be that the local authority dumped or purged a lot of business records at the same time (this happens mostly in smaller places). It helps to shift that area to one side, and the heat map will return to a more useful scale for the rest.

The list below the map shows the coming and going businesses in the area you are looking at. In the left pane you can filter by business name or type, for example if you are wondering what supermarkets in general, or Tesco in particular, are doing in an area.

Feedback is most welcome below this post. For the future I am considering adding comparisons between different time periods to make this tool even more useful. Please be kind to the tiny server if it is slow. My thanks to the makers of R, Shiny, ggmap and leaflet, and to EC2 for the server.

New and closing restaurants and sandwich shops in the UK.

Combining probability distributions

This table is as yet unfinished.

If I observe the sum of two processes with known distributions, the distribution of the observations is expected to be …

Normal + Normal: Normal (for Normal + Poisson, see here)
Poisson + Poisson: Poisson; approximately normal if there are many summands
Binomial + Binomial: Binomial if all share a common p; approximately Poisson if there are many summands; Poisson binomial otherwise (note also the binomial sum variance inequality and this)
Uniform + Uniform: Irwin-Hall

Wikipedia has some more. A general discussion of convolutions of probability distributions can be found, for example, here.
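One of the cells is easy to verify by simulation; a quick sketch in R for the Poisson diagonal (the rates 2 and 3 are arbitrary):

# The sum of two Poisson processes is Poisson with the rates added
set.seed(1)
s <- rpois(1e5, lambda = 2) + rpois(1e5, lambda = 3)
c(mean(s), var(s))                         # both close to 5, as for Poisson(5)
plot(table(s) / length(s))                 # empirical distribution of the sum
points(0:15, dpois(0:15, 5), col = "red")  # theoretical Poisson(5) probabilities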

Feature detectors in animal vision

Image feature detectors are a concept common to mammalian vision and computer vision. With feature detectors, a raster image is not processed directly to identify complex objects (e.g. a flower, or the digit 2). Instead, the detectors map the distribution of simple figures (such as straight edges) within the image, and higher layers of the neural network then use these maps to distinguish objects.

In the mammalian brain's visual cortex (which is at the back of the head, at the furthest possible point from the eyes), the excitation pattern on the retina is recreated as a spatially faithful projection. Overlapping sets of feature detectors use this projection as their input.

From eyeball to visual cortex in humans. Note the Ray-Ban-shaped area at the back of the brain where the retinal excitation pattern is projected to with some distortions. (From Frisby: Seeing: the computational approach to biological vision (2010), p 5)

How we know about retinotopic projection to the visual cortex: an autoradiography of a macaque brain slice shows in dark the neurons that were most active as a result of the animal seeing the image at top left. (From Tootell et al., Science (1982) 218, 902-904.)

A feature detector neuron becomes active when its favourite pattern shows up in the projected visual field, or more exactly in the area of the visual field that the particular detector watches. A typical class of detectors is specific for edges at a specific angle, with one side dark and the other light. Other neurons recognise more complex patterns, and some also require motion for activation. Together these detectors cover the entire visual field, and their excitation pattern is the input to higher layers of processing.

We first learned about these neurons by sticking microelectrodes into the visual cortex and measuring electrical activity. When lucky, the electrode picked up the activity of a single neuron; by showing different visual stimuli, the activation pattern of that neuron could then be mapped.

A toad's antiworm detector neuron reacts to a stripe moving across its receptive field. The stripe may move in any direction, but the neuron reacts only when it moves crosswise. The worm detector, for comparison, reacts when the stripe moves lengthwise. The toad is at the right with a microelectrode; the oscillogram above the screen shows the activity of the tapped feature detector neuron. (Segment from Jörg-Peter Ewert, Gestalt Perception in the Common Toad – 3. Neuroethological Analysis of Prey Recognition.)

Continue reading