
The Moon was formed only 50 million years after the formation of the solar system

The Moon was formed 4.51 billion years ago, only 50 million years after the formation of the solar system, according to a new study by researchers at the University of Cologne. This is considerably earlier than some previous estimates, according to which our natural satellite formed about 150 million years after the solar system.

The researchers analyzed samples of the lunar surface brought back to Earth by NASA’s Apollo missions. These amount to several kilograms of lunar material, much of it never analyzed in depth, which preserves abundant evidence about the Moon’s formation despite having been collected at the surface.

In particular, they analyzed the chemical signatures of various elements in these rocks, including hafnium and tungsten, elements that formed at different times and that record the solidification of the magma ocean, as explained by Raúl Fonseca, one of the researchers involved in the study together with his colleague Felipe Leitzke.
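To see why such isotopes can act as a clock, consider hafnium-182, which decays into tungsten-182 with a half-life of roughly 8.9 million years (a figure from the general literature on hafnium-tungsten chronometry, not quoted in the study itself). The surviving fraction after a time t is

```latex
\frac{N(t)}{N_0} = 2^{-t/t_{1/2}}, \qquad t_{1/2} \approx 8.9\ \mathrm{Myr}
\quad\Rightarrow\quad \frac{N(50\ \mathrm{Myr})}{N_0} = 2^{-50/8.9} \approx 0.02
```

In other words, only about 2% of the original hafnium-182 survives after 50 million years, so the tungsten-182 it produces effectively time-stamps events from the solar system’s first tens of millions of years, such as the solidification of the lunar magma ocean.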

These new data do not refute the theory that the Moon formed after an enormous collision between the Earth and another planetary body the size of Mars; indeed, they probably corroborate it. The debris thrown up by that collision went on to form the Moon, which was initially covered by a vast magma ocean of rock melted during the impact.

These rocks began to solidify relatively early, as soon as 50 million years after the formation of the solar system itself, and they are the same rocks that can be found on the surface of the Moon today, as Maxwell Thiemens, the lead author of the study, notes.

Such observations are no longer possible here on Earth, where a similar magma ocean once existed, because our planet’s geology is still very much alive. The Moon therefore offers a “unique opportunity to study planetary evolution,” as Peter Sprung, another author of the study, states.


New 2-photon microscope breaks the speed limit of brain imaging

A new 2-photon microscope that “breaks a long-standing speed limit” for brain activity analysis was built by a group of scientists at the Janelia Research Campus of the Howard Hughes Medical Institute.

The microscope, which according to the scientists is 15 times faster than was previously thought possible, collects data quickly enough to record the voltage spikes of individual neurons and the release of chemical messengers over large areas of the brain. This allows hundreds of synapses to be monitored simultaneously, something neuroscientists working on brain imaging once considered a dream and that could now become an everyday reality.

These events not only have to be observed in the brains of patients or live animals (already a great difficulty, since a living brain is largely impenetrable to classic optical microscopy) but may also last only a few milliseconds. The study, published in Nature Methods, explains that “classic” two-photon microscopes struggle to capture such brain activity because each measurement takes several nanoseconds, which limits how quickly an image can be captured and therefore how much data can be acquired.

“We exceeded this limit by compressing the measurements,” says Kaspar Podgorski, one of the researchers working on the project. He is referring to what might appear to be a fundamental limit on acquisition: the number of pixels multiplied by the minimum time per pixel.
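As a rough back-of-the-envelope illustration of that limit (the numbers below are assumptions made for the sake of the example, not figures from the study), a conventional point-scanning microscope spends a fixed amount of time on each pixel, so the frame time is approximately the pixel count times the per-pixel measurement time:

```python
# Illustrative calculation: frame time of a conventional point-scanning microscope.
# Both numbers are assumed for the example, not taken from the study.
pixels_per_frame = 512 * 512       # assumed image size
time_per_pixel_s = 100e-9          # assumed ~100 ns minimum measurement time per pixel
frame_time_s = pixels_per_frame * time_per_pixel_s
print(f"frame time ≈ {frame_time_s * 1e3:.1f} ms, i.e. ≈ {1 / frame_time_s:.0f} frames per second")
```

With these assumed numbers a full frame takes about 26 ms, i.e. under 40 frames per second, far too slow for millisecond-scale voltage events, which is why reducing the number of measurements per frame matters.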

The new device is a Scanned Line Angular Projection (SLAP) microscope, which sweeps a line of light across the sample along four different orientations and, rather than recording every pixel along that line, compresses the points of the line into a single number.
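The sketch below illustrates the general idea of compressing a scanned line of pixels into single numbers taken at several projection angles; it is a toy example under stated assumptions, not the actual SLAP optics or reconstruction algorithm.

```python
import numpy as np
from scipy.ndimage import rotate

# Toy illustration of line-projection measurements: instead of recording every pixel
# along a scanned line, each line is summed into one number, and the scan is repeated
# at a few angles. Far fewer numbers than pixels are recorded per frame.
image = np.zeros((64, 64))
image[20, 30] = 1.0  # one bright point standing in for a labelled synapse

def line_sums(img, angle_deg):
    """Rotate the image and sum each row: one measurement per scanned line."""
    return rotate(img, angle_deg, reshape=False, order=1).sum(axis=1)

projections = {a: line_sums(image, a) for a in (0, 45, 90, 135)}
n_measurements = sum(p.size for p in projections.values())
print(f"{n_measurements} measurements instead of {image.size} pixels per frame")
```

Recovering an image from so few measurements is only possible when the scene is structured or sparse, which is the general trade-off that buys the extra speed.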


Software can tell if a smile is authentic or not

Computer-aided techniques for analyzing human facial expressions are becoming increasingly sophisticated and effective, and a researcher at the University of Bradford, Hassan Ugail, is leading a new project in this field.

Specifically, the researcher and his team are developing software able to identify false facial expressions, the ones we constantly make when talking to someone, from half-smiles to movements of the eyebrows or the mouth. In particular, the software can analyze the smile on a person’s face and determine whether or not it is genuine.

To do this, it analyzes even the slightest movements of other parts of the face, something a person can hardly do, not least because these tiny movements last only a very short time. The researchers used machine learning: they fed a computer program a large set of data, essentially images of people producing genuine smiles and images of posed ones. After this “training” phase, the software was able to find significant differences in the subtle movements of the mouth and cheeks and to identify the genuine expressions while discarding the false ones.
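A minimal sketch of this kind of training, using synthetic data and off-the-shelf tools (a hypothetical illustration of the general approach, not Ugail’s actual pipeline or feature set):

```python
import numpy as np
from sklearn.model_selection import train_test_split
from sklearn.svm import SVC

# Synthetic toy data: each sample is a tiny feature vector describing how much the
# mouth region and the eye region move during a smile. Labels: 1 = genuine, 0 = posed.
rng = np.random.default_rng(0)
n = 400
labels = rng.integers(0, 2, n)
mouth_motion = rng.normal(1.0, 0.2, n)            # both kinds of smile move the mouth
eye_motion = np.where(labels == 1,
                      rng.normal(0.8, 0.2, n),    # genuine: the region around the eyes moves
                      rng.normal(0.1, 0.1, n))    # posed: little movement around the eyes
X = np.column_stack([mouth_motion, eye_motion])

X_train, X_test, y_train, y_test = train_test_split(X, labels, random_state=0)
classifier = SVC().fit(X_train, y_train)
print("held-out accuracy:", classifier.score(X_test, y_test))
```

Because the synthetic classes differ mainly in the eye-region feature, the classifier learns to rely on exactly the cue described below: mouth movement alone does not separate genuine from posed smiles.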

As Ugail himself explains, the differences between a genuine and a false smile are minimal and concern above all the zygomaticus major muscle, which curves the mouth upwards, and the orbicularis oculi muscle, which is responsible for the characteristic folds that form around the eyes when you smile: “In false smiles it is often only the muscles of the mouth that move but, as humans, we often do not detect the lack of movement around the eyes. Computer software can detect it much more reliably.”