I think it’s easy to paint the optimistic picture of what, if we get all of this right, it could mean for our future. One trillion devices isn’t an absurd number. But these types of new technology can be very fragile. It’s interesting comparing CRISPR [the gene-editing technology] to genetically modified crops: GM crops had some bad publicity early on, and that essentially killed the area for a while, whereas CRISPR has had lots of positive publicity: it’s cured cancer in children. IoT will be similar. If there are missteps early on, people will lose faith, so we have to crack those problems, at least to a point where the good vastly outweighs the bad.
Planning out and managing microservices seems like another area where EAs have a strong role, both for initial leadership and for ongoing governance. Sure, you want to try your best to adopt this hype-y practice of modularising all those little services your organisation uses, but sooner or later you’ll end up with a ball of services that might be duplicative to the point of being confusing. It’s all well and good for developer teams to have more freedom in defining the services they build and choosing which ones they use, but you probably don’t want, for example, five different ways to do single sign-on. Each individual team likely shouldn’t be relied upon to do this cross-portfolio hygiene work and would benefit from an EA-like role instead: someone minding the big ball of microservices.
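That hygiene work is essentially an inventory problem: group every service in the portfolio by the capability it implements and flag capabilities with more than one implementation. A minimal sketch, using a hypothetical service inventory (the service names and capability labels here are illustrative, not from any real catalogue):

```python
from collections import defaultdict

# Hypothetical portfolio inventory: (service name, capability it implements).
inventory = [
    ("auth-svc", "single-sign-on"),
    ("legacy-login", "single-sign-on"),
    ("sso-gateway", "single-sign-on"),
    ("billing-svc", "invoicing"),
]

def find_duplicates(services):
    """Return capabilities implemented by more than one service."""
    by_capability = defaultdict(list)
    for name, capability in services:
        by_capability[capability].append(name)
    return {cap: names for cap, names in by_capability.items() if len(names) > 1}

# find_duplicates(inventory) flags "single-sign-on" as implemented three ways.
```

In practice the inventory would come from a service registry or API catalogue rather than a hard-coded list, but the governance question an EA asks is the same: which capabilities have needlessly multiplied?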
“Brainternet is a new frontier in brain-computer interface systems,” said Adam Pantanowitz, ... According to him, we presently lack easily comprehensible data about the mechanics of the human brain and how it processes information. The Brainternet project aims “to simplify a person’s understanding of their own brain and the brains of others.” “Ultimately, we’re aiming to enable interactivity between the user and their brain so that the user can provide a stimulus and see the response,” added Pantanowitz, noting that “Brainternet can be further improved to classify recordings through a smart phone app that will provide data for a machine-learning algorithm. In future, there could be information transferred in both directions – inputs and outputs to the brain.”
The trend of applying cyber-security practices to test systems makes sense for several reasons, most notably the rise in cyber-security incidents that exploit unmonitored network devices. The second reason this trend makes sense is that security practices and technology for general-purpose IT systems are more mature. However, this trend does not make sense categorically, for at least two reasons. First, IT-enabled test systems are less tolerant of even small configuration changes. Users of IT systems can tolerate downtime and may not even perceive application performance differences, but special-purpose test systems (especially those used in production) often cannot tolerate them. Second, test systems often have unique security needs: they typically run specialized test software not used on other organization computers.
In an email to Singularity Hub, series creator EJ Kavounas said, “With everyone from Elon Musk to Stephen Hawking making dire predictions about the possible dangers of machine intelligence, we felt the character could inject black comedy while discussing real issues of consciousness and humanity’s relationship with the unknown.” Nina’s story starts with Alastair Reynolds, a psychiatrist. During their meeting she explains her past to him, and after watching a recording in which she detonated a missile to kill someone, she breaks into tears. So we know she has feelings—or at the very least, she’s good at faking them. “The biggest thing I try to keep in mind when playing Nina is that everything she does and says was specifically programmed to mimic human behavior and language,” according to actor Lana McKissack, who plays Nina.
While LoRa offers the benefit of addressing ultra-low-power requirements for a range of low-bit-rate IoT connectivity, it faces a range limitation and must rely on an intermediary gateway before data can be aggregated and sent to a central server. The cost of deploying multiple gateways across different IoT scenarios would defeat the very economic purpose of using an arguably low-cost solution like LoRa. Moreover, solutions like LoRa are not suited to those IoT applications where HD and ultra-HD streaming is a prerequisite. 5G would potentially address both low-bit-rate and ultra-HD IoT connectivity requirements, while also obviating the need for an intermediary gateway, thus leading to additional cost savings. Moreover, 5G would have the potential to cover as many as one million IoT devices per square kilometer.
As electric power becomes more important for everything from ubiquitous computing to transport, researchers are increasingly looking for ways to avoid some of the drawbacks of current electricity storage devices. Whether they are batteries, which release a steady stream of electric current, or supercapacitors, which release a sharper burst of charge, storage devices depend on conductive electrolyte fluids to carry charge between their electrodes. Susceptible to leakage and often flammable, these fluids have been behind many of the reported problems with batteries in recent years, including fires on board aircraft and exploding tablet computers (the latter caused by short-circuiting inside miniaturised batteries).
Unlike its predecessors, the underlying Lambda infrastructure is entirely unavailable to sysadmins or developers. Scale is not configurable; instead, Lambda reacts to usage and scales up automatically. Rather than EC2, Lambdas run on ECS, and the containers are not available for modification. There is no load balancer or Amazon-provided endpoint: if you want to make Lambdas accessible to the web, it must be done through an API Gateway, which acts as a URL router to Lambda functions. ... One of the major advantages touted by Amazon for using Lambda was reduced cost. The cost model of Lambda is time-based: you’re charged for requests and request duration. You’re allotted a certain number of seconds of use that varies with the amount of memory you require. Likewise, the price per millisecond varies with the amount of memory you require.
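The time-and-memory pricing model described above can be sketched as a quick estimate. The helper below is illustrative, not an official calculator: the default prices are assumptions for the sketch (per-million-request and per-GB-second rates vary by region and change over time), and it ignores the free tier the excerpt mentions.

```python
def lambda_monthly_cost(requests, avg_duration_ms, memory_mb,
                        price_per_million_requests=0.20,
                        price_per_gb_second=0.0000166667):
    """Rough Lambda bill: a per-request charge plus a compute charge
    billed in GB-seconds (duration scaled by allocated memory).
    Default prices are illustrative assumptions."""
    request_cost = (requests / 1_000_000) * price_per_million_requests
    gb_seconds = requests * (avg_duration_ms / 1000) * (memory_mb / 1024)
    compute_cost = gb_seconds * price_per_gb_second
    return request_cost + compute_cost
```

The key property the excerpt describes falls out of the `gb_seconds` term: doubling either the configured memory or the average duration doubles the compute portion of the bill, which is why right-sizing memory matters as much as shaving milliseconds.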
The majority of companies underestimate the importance of rich and diverse data sets to train algorithms, and especially the value of “negative data” associated with failure to successfully execute a task. Talent shortages and unequal access to data engineers and AI experts compound matters. Privacy and other regulations as well as consumer mistrust also temper progress. Whereas such barriers may be expected to decrease over time, there are also more subtle barriers to AI’s adoption that will need to be overcome to unlock its full potential. Algorithmic prowess is often deployed locally, on discrete tasks; but improved learning and execution for one step of a process does not usually improve the effectiveness of the entire process.
“The ideal wearable portable solar cell would be a piece of textile. That exists in the lab but is not a sellable product.” This new research from the RIKEN and Tokyo teams has taken that textile a big step forward from lab curiosity to actual product. What they have done is create a cell so small and flexible that it could, in time, be seamlessly woven into our clothing, rather than awkwardly placed on the outside of a jacket. These solar cells are phenomenally thin, measuring just three millionths of a meter in thickness. Given a special coating that can let light in while keeping water and air out, the cell was able to keep efficiently gathering solar energy even after being soaked in water or bent completely out of its original shape.
Quote for the day:
"Change is the end result of all true learning." -- Leo Buscaglia