Daily Tech Digest - June 11, 2021

Why API Quality Is Top Priority for Developers

Processes such as chaos engineering, load testing and manual quality assurance can uncover cases where an API fails to handle unexpected conditions. Deploying your API to a cloud provider with a compelling SLA instead of your own hardware and network shifts the burden of infrastructure resiliency to a service, freeing your time to build features for your customers. A comprehensive suite of automated tests isn’t always sufficient to provide a robust API. Edge cases, unexpected code branches and other unplanned behavior may be triggered by requests that were not considered when writing the test suite. Traditional automated tests should be complemented by fuzz testing to help uncover hidden execution paths. ... It is expected that most APIs are built on layers of open source libraries and frameworks. Software composition analysis is a necessity to stay on top of zero-day vulnerabilities by identifying vulnerable dependencies as soon as they are discovered. OWASP guidance is a must-have, directing API developers to implement attack mitigation strategies such as CORS and CSRF protection. Application logic must be well tested for authorization and authentication.
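As a sketch of the idea, a minimal fuzz harness throws randomized inputs at a handler and records anything that crashes or violates an invariant; the handler and its contract below are hypothetical, and real fuzzers (AFL, libFuzzer, property-based tools) are far more sophisticated:

```python
import random
import string

def handle_request(path: str) -> int:
    """Hypothetical API handler; returns an HTTP-style status code."""
    if not path.startswith("/"):
        return 400
    if any(ord(c) < 0x20 for c in path):  # reject control characters
        return 400
    return 200

def fuzz(handler, trials=1000, seed=0):
    """Throw randomized inputs at the handler; record crashes and
    contract violations instead of only the cases a human thought of."""
    rng = random.Random(seed)
    failures = []
    for _ in range(trials):
        path = "".join(rng.choice(string.printable)
                       for _ in range(rng.randint(0, 32)))
        try:
            status = handler(path)
            assert status in (200, 400), f"unexpected status {status}"
        except Exception as exc:
            failures.append((path, exc))
    return failures

print(len(fuzz(handle_request)))  # a robust handler survives with no failures
```

The value of the approach is exactly the point made above: the random generator exercises inputs no one wrote a test for.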


New Ransomware Group Claiming Connection to REvil Gang Surfaces

Like many established ransomware operators, the gang behind Prometheus has adopted a very professional approach to dealing with its victims — including referring to them as "customers," PAN said. Members of the group communicate with victims via a customer service ticketing system that includes warnings on approaching payment deadlines and notifications of plans to sell stolen data via auction if the deadline is not met. "New ransomware gangs like Prometheus follow the same TTPs as big players [such as] Maze, Ryuk, and NetWalker because it is usually effective when applied the right way with the right victim," Santos says. "However, we do find it interesting that this group sells the data if no ransom is paid and are very vocal about it." From samples provided by the Prometheus ransomware gang on their leak site, the group appears to be selling stolen databases, emails, invoices, and documents that include personally identifiable information. "There are marketplaces where threat actors can sell leaked data for a profit, but we currently don't have any insight on how much this information could be sold for in a marketplace," Santos says.


Google is using AI to design its next generation of AI chips more quickly than humans can

Google’s engineers trained a reinforcement learning algorithm on a dataset of 10,000 chip floor plans of varying quality, some of which had been randomly generated. Each design was tagged with a specific “reward” function based on its success across different metrics like the length of wire required and power usage. The algorithm then used this data to distinguish between good and bad floor plans and generate its own designs in turn. As we’ve seen when AI systems take on humans at board games, machines don’t necessarily think like humans and often arrive at unexpected solutions to familiar problems. When DeepMind’s AlphaGo played human champion Lee Sedol at Go, this dynamic led to the infamous “move 37” — a seemingly illogical piece placement by the AI that nevertheless led to victory. Nothing quite so dramatic happened with Google’s chip-designing algorithm, but its floor plans nevertheless look quite different to those created by a human. Instead of neat rows of components laid out on the die, sub-systems look like they’ve almost been scattered across the silicon at random.
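The "reward" idea above can be illustrated schematically. Google's actual reward combines proxy metrics with particular weights that are not spelled out here, so the function and weights below are only an invented stand-in for the shape of such a signal:

```python
def floorplan_reward(wirelength, power, congestion, weights=(1.0, 1.0, 1.0)):
    """Toy reward for a chip floor plan: a lower weighted cost over
    placement metrics yields a higher reward for the learning agent."""
    w_len, w_pow, w_con = weights
    return -(w_len * wirelength + w_pow * power + w_con * congestion)

# A placement with shorter wires and lower power scores strictly higher:
assert floorplan_reward(10.0, 2.0, 1.0) > floorplan_reward(20.0, 3.0, 1.0)
```

Tagging each of the 10,000 floor plans with such a score is what lets the algorithm learn to distinguish good layouts from bad ones.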


More and More Professionals Are Using Their Career Expertise to Launch Entrepreneurial Ventures

The first step is to immerse yourself within your training and specialty and have the confidence to be a key thought leader in the space. Do the extra research, spend the time to learn all of the new information and data in your field to truly understand the opportunity within. "I have been fortunate to be involved with several top academic institutions during my training. While the training was fantastic, there were areas that I felt could be improved for the ultimate outcome of increased access to high-quality healthcare," says Dr. Bajaj. "Thankfully, this vision has resulted in great outcomes and happy patients." ... "Ready. Fire. Aim!" as Dr. Bajaj puts it, "Time was not waiting for me to be fully prepared. Sometimes you have to take the leap." In entrepreneurship, there are no guarantees, which is quite different from some of the career paths that we have trained for our entire academic life. Guaranteed salary, retirement plans, and annual bonuses are far from promised in your own business, and it is important to adapt accordingly. Everything will not go according to plan, and it is important to find comfort with that. As long as the launchpad for growth has been established, patience is the biggest challenge, not security.


CISOs: It's time to get back to security basics

The goal of cybersecurity used to be protecting data and people's privacy, Summit said. There has been a major shift in that thinking. "It's one thing to lose a patient's data, which is extremely important to protect, but when you start interrupting" people's ability to travel or the food supply chain, "you have a whole different level of problems … It's not just about protecting data but your operations. That's where major changes are starting to occur." Summit added that he has long said that if companies had made cybersecurity a high priority long before now, "we wouldn't be in this position" and facing government scrutiny. The cybersecurity field is "incredibly dynamic," Hatter said, and CISOs don't have the luxury of planning out three to five years. "We want to create and deploy a strategy that's sound and solid. But market forces demand we recalibrate what we do, and COVID-19 was a great example of that." CISOs now have to have as resilient a strategy as possible but be prepared to make changes. Managed security service providers can help, Summit said, but CISOs are still feeling overwhelmed.


New quantum entanglement verification method cuts through the noise

Virtually any interaction with their environment can cause them to collapse like a house of cards and lose their quantum correlations – a process called decoherence. If this happens before an algorithm finishes running, the result is a mess, not an answer. (You would not get much work done on a laptop that had to restart every second.) In general, the more qubits a quantum computer has, the harder they are to keep quantum; even today’s most advanced quantum processors still have fewer than 100 physical qubits. The solution to imperfect physical qubits is quantum error correction (QEC). By entangling many qubits together in a so-called “genuine multipartite entangled” (GME) state, where every qubit is entangled with every other qubit in that bunch, it is possible to create a composite “logical” qubit. This logical qubit acts as an ideal qubit: the redundancy of the shared information means if one of the physical qubits decoheres, the information can be recovered from the rest of the logical qubit. Developing quantum error-correcting systems requires verifying that the GME states used in logical qubits are present and working as intended, ideally as quickly and efficiently as possible.
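The role of redundancy can be sketched with a classical analogy: a three-bit repetition code, in which a "logical" bit survives the corruption of any single "physical" bit. Real QEC operates on entangled states with syndrome measurements rather than direct copies (cloning a qubit is impossible), so this only illustrates the recovery-from-redundancy idea:

```python
from collections import Counter

def encode(bit):
    """Repetition-code analogy of a logical qubit: spread one logical
    bit of information across three physical bits."""
    return [bit, bit, bit]

def corrupt(codeword, index):
    """Flip one physical bit, mimicking a single decoherence event."""
    flipped = codeword.copy()
    flipped[index] ^= 1
    return flipped

def decode(codeword):
    """Majority vote recovers the logical bit despite one error."""
    return Counter(codeword).most_common(1)[0][0]

assert decode(corrupt(encode(1), 0)) == 1  # survives an error on bit 0
assert decode(corrupt(encode(0), 2)) == 0  # survives an error on bit 2
```

Verifying that the underlying GME state is actually present, which quantum codes require and this classical toy does not, is precisely the problem the new verification method addresses.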


DeepMind says reinforcement learning is ‘enough’ to reach general AI

Some scientists believe that assembling multiple narrow AI modules will produce more intelligent systems. For example, you can have a software system that coordinates between separate computer vision, voice processing, NLP, and motor control modules to solve complicated problems that require a multitude of skills. A different approach to creating AI, proposed by the DeepMind researchers, is to recreate the simple yet effective rule that has given rise to natural intelligence. “[We] consider an alternative hypothesis: that the generic objective of maximising reward is enough to drive behaviour that exhibits most if not all abilities that are studied in natural and artificial intelligence,” the researchers write. This is basically how nature works. As far as science is concerned, there has been no top-down intelligent design in the complex organisms that we see around us. Billions of years of natural selection and random variation have filtered lifeforms for their fitness to survive and reproduce. Living beings that were better equipped to handle the challenges and situations in their environments managed to survive and reproduce. The rest were eliminated.
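The "reward is enough" hypothesis is easiest to see in miniature. In the sketch below (a multi-armed bandit with invented payoffs, far simpler than anything in the paper), an agent that only ever maximises reward nonetheless acquires a useful "ability": reliably preferring the best option without being told which one it is:

```python
import random

def run_bandit(arm_means, steps=5000, eps=0.1, seed=0):
    """Epsilon-greedy agent: it only tries to maximise reward, yet
    ends up 'knowing' which arm pays best."""
    rng = random.Random(seed)
    n = len(arm_means)
    estimates = [0.0] * n   # running estimate of each arm's payoff
    counts = [0] * n
    for _ in range(steps):
        if rng.random() < eps:
            arm = rng.randrange(n)  # occasionally explore
        else:
            arm = max(range(n), key=lambda a: estimates[a])  # exploit
        reward = rng.gauss(arm_means[arm], 1.0)  # noisy reward signal
        counts[arm] += 1
        estimates[arm] += (reward - estimates[arm]) / counts[arm]
    return estimates, counts

estimates, counts = run_bandit([0.2, 1.0, 0.5])
# After training, the agent pulls the best arm (index 1) far more than the others.
```

Selection pressure in nature plays the role of the reward signal here: behaviours that pay off get reinforced, and the rest fade away.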


Evaluation of Cloud Native Message Queues

The significant rise in internet-connected devices will consequently have a substantial influence on systems’ network traffic, and current point-to-point technologies using synchronous communication between end-points in IoT systems are no longer a sustainable solution. Message queue architectures using the publish-subscribe paradigm are widely implemented in event-based systems. This paradigm uses asynchronous communication between entities and lends itself to scalable, high-throughput, and low-latency systems that are well adapted to the IoT domain. This thesis evaluates the adaptability of three popular message queue systems in Kubernetes. The systems are designed differently: e.g., the Kafka system uses a peer-to-peer architecture, while STAN and RabbitMQ use a master-slave architecture applying the Raft consensus algorithm. A thorough analysis of the systems’ capabilities in terms of scalability, performance, and overhead is presented. The conducted tests give further insight into how the performance of the Kafka system is affected in multi-broker clusters using multiple partitions, enabling higher levels of parallelism for the system.
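The publish-subscribe decoupling the thesis evaluates can be sketched in a few lines. This is a toy in-memory broker, not Kafka, STAN, or RabbitMQ: publishers and subscribers never reference each other directly, and fan-out is asynchronous from the publisher's point of view:

```python
from collections import defaultdict
from queue import Queue

class Broker:
    """Toy in-memory pub-sub broker: publishers and subscribers are
    fully decoupled and only share a topic name."""
    def __init__(self):
        self.topics = defaultdict(list)  # topic -> subscriber queues

    def subscribe(self, topic):
        q = Queue()
        self.topics[topic].append(q)
        return q

    def publish(self, topic, message):
        # Asynchronous fan-out: the publisher never blocks on subscribers.
        for q in self.topics[topic]:
            q.put(message)

broker = Broker()
inbox_a = broker.subscribe("sensor/temp")
inbox_b = broker.subscribe("sensor/temp")
broker.publish("sensor/temp", {"device": 7, "celsius": 21.4})
msg_a, msg_b = inbox_a.get(), inbox_b.get()  # both subscribers receive the message
```

Partitioning, as studied for Kafka, generalises this picture: a topic is split into independent queues so that multiple consumers can drain it in parallel.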


Mysterious Custom Malware Collects Billions of Stolen Data Points

Researchers have uncovered a 1.2-terabyte database of stolen data, lifted from 3.2 million Windows-based computers over the course of two years by an unknown, custom malware. The heisted info includes 6.6 million files and 26 million credentials, and 2 billion web login cookies – with 400 million of the latter still valid at the time of the database’s discovery. According to researchers at NordLocker, the culprit is a stealthy, unnamed malware that spread via trojanized Adobe Photoshop versions, pirated games and Windows cracking tools, between 2018 and 2020. It’s unlikely that the operators needed any depth of skill to pull off their data-harvesting campaign, they added. “The truth is, anyone can get their hands on custom malware. It’s cheap, customizable, and can be found all over the web,” the firm said in a Wednesday posting. “Dark Web ads for these viruses uncover even more truth about this market. For instance, anyone can get their own custom malware and even lessons on how to use the stolen data for as little as $100. And custom does mean custom – advertisers promise that they can build a virus to attack virtually any app the buyer needs.”


Get your technology infrastructure ready for the Age of Uncertainty

As I say, it’s by no means clear what happens next and how ingrained changes will be. It’s plausible, of course, that we largely go back to old habits although that seems unlikely with a groundswell of employees having become accustomed to a different way of life and a different way of working. And it’s worth noting that even a return to ancient ways of living, in the form of crafts, baking and so on, is now a very much digitally imbued activity. We download apps, consult websites and share ideas on forums when we try out a new recipe, and this sort of binary activity is part of the fabric of life because it is faster, more convenient and more scalable than the older alternatives. But what we need to do is strike the perfect balance between technology-enabled agility and what we want to do with our time. What we will need to manage through change is clear though. Adaptivity, enabled by robust, data-centric digital business designs, will become the watchword of operations. In other words, companies will need to be able to move fast, whatever happens, changing operating models, moving into adjacent markets and generally taking nothing for granted. In the new Age of Uncertainty, legacy systems have to be reassessed in the context of how best to build for agility.



Quote for the day:

"To have long term success as a coach or in any position of leadership, you have to be obsessed in some way." -- Pat Riley

Daily Tech Digest - June 10, 2021

Tracing: Why Logs Aren’t Enough to Debug Your Microservices

Traces complement logs. While logs provide information about what happened inside the service, distributed tracing tells you what happened between services/components and their relationships. This is extremely important for microservices, where many issues are caused by failed integration between components. Also, logs are a manual developer tool and can be used for any level of activity – a specific low-level detail, or a high-level action. This is also why there are many logging best practices available for developers to learn from. On the other hand, traces are generated automatically, providing the most complete understanding of the architecture. Distributed tracing is tracing that is adapted to a microservices architecture. Distributed tracing is designed to enable request tracking across autonomous services and modules, providing observability into cloud native systems. ... Distributed tracing provides observability and a clear picture of the services. This improves productivity because it enables developers to spend less time trying to locate errors and debugging them, as the answers are more clearly presented to them.
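The mechanism behind this, spans that share a trace ID across service boundaries, can be sketched without any tracing library. The service names and header shapes here are illustrative, not the OpenTelemetry API, but the propagation pattern is the same:

```python
import time
import uuid

SPANS = []  # finished spans; a real tracer would export these to a backend

def start_span(name, trace_id=None, parent_id=None):
    return {"name": name,
            "trace_id": trace_id or uuid.uuid4().hex,  # new trace if none inherited
            "span_id": uuid.uuid4().hex,
            "parent_id": parent_id,
            "start": time.time()}

def end_span(span):
    span["duration"] = time.time() - span["start"]
    SPANS.append(span)

def checkout_service(headers):
    # The downstream service continues the caller's trace from propagated context.
    span = start_span("checkout", headers["trace_id"], headers["span_id"])
    end_span(span)

def api_gateway():
    root = start_span("gateway")
    # In practice the trace context travels between services in HTTP headers.
    checkout_service({"trace_id": root["trace_id"], "span_id": root["span_id"]})
    end_span(root)

api_gateway()
# Both spans carry the same trace_id, tying the request together across services.
```

Because every span records its parent, a backend can reassemble the full request path and show exactly which hop between services failed.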


How SAML 2.0 Authentication Works and Why It Matters

At its core, Security Assertion Markup Language (SAML) 2.0 is a means to exchange authorization and authentication information between services. SAML is frequently used to implement internal corporate single sign-on (SSO) solutions where the user logs into a service that acts as the single source of identity which then grants access to a subset of other internal services. ... Generally, SAML authentication solves three important problems: SAML offers a significant improvement to user experience, since users only have to remember their credentials with a single identity provider rather than worrying about usernames and passwords for every application they use; SAML allows application developers to outsource identity management and authentication to external providers without implementing it themselves; and perhaps most importantly, SAML dramatically reduces the operational overhead of managing access within an organization. If an employee leaves or transfers to another team, their access will be automatically revoked or downgraded across all applications connected to the identity provider.
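The trust relationship can be sketched as follows. This is a deliberately simplified analogy: real SAML exchanges signed XML assertions (XML-DSig with X.509 certificates) rather than JSON with an HMAC shared key, and the names below are invented, but the division of labour between identity provider (IdP) and service provider (SP) is the same:

```python
import hashlib
import hmac
import json
import time

SHARED_KEY = b"idp-sp-demo-key"  # stand-in for the IdP's signing key/certificate

def idp_issue_assertion(user, audience):
    """Identity provider: assert who the user is and sign the claim."""
    claims = {"subject": user, "audience": audience,
              "issued_at": int(time.time()), "ttl": 300}
    payload = json.dumps(claims, sort_keys=True).encode()
    sig = hmac.new(SHARED_KEY, payload, hashlib.sha256).hexdigest()
    return {"claims": claims, "signature": sig}

def sp_accept_assertion(assertion, audience):
    """Service provider: verify signature, audience, and expiry
    before granting access -- never re-check the password itself."""
    payload = json.dumps(assertion["claims"], sort_keys=True).encode()
    expected = hmac.new(SHARED_KEY, payload, hashlib.sha256).hexdigest()
    if not hmac.compare_digest(expected, assertion["signature"]):
        return False
    claims = assertion["claims"]
    return (claims["audience"] == audience and
            claims["issued_at"] + claims["ttl"] > time.time())

token = idp_issue_assertion("alice@example.com", "wiki.internal")
assert sp_accept_assertion(token, "wiki.internal")         # valid assertion
assert not sp_accept_assertion(token, "payroll.internal")  # wrong audience
```

The audience check is why one assertion cannot be replayed against a different application, and the signature is why the SP never needs to see the user's password.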


How to build Data Science capabilities within an organization

Signing up for a data science program is half the battle won. But only a strong, steady commitment and effort will take it to completion and yield amazing results. You as an organization may be clear on the ‘why’ of the whole endeavor. You know that more self-sufficiency and expertise will bring in more revenue. But without communicating the benefits learning data science has for your employees, you are unlikely to see genuine involvement. You can encourage buy-in from employees by showcasing the future career path, rewards of upskilling, higher payouts for working on advanced projects, or even the fear of being left out (I hate to say it, but that's how the cookie crumbles). Of course, senior leadership in your organization needs to weigh the pros & cons of such a transformation and accordingly roll out the mandate to selected groups, as there may be employees who are not sold on the idea of building the skills required for data science at all. ... A great deal of time, energy, and effort is saved by a wide variety of platforms that provide a bunch of tools and services for data science monitoring. They track and test employees' progress during the data science program. This can keep your employees on their toes.


New identities are creating opportunities for attackers across the enterprise

The adoption of cloud services, third parties, and remote access has dissolved the traditional network perimeter and made security a far more complex equation than before. Identity security is quickly emerging as the primary line of defence for most organisations, because it allows security teams to tailor each user’s access proportionately based on the needs of their job role. Underpinning this model is Zero-Trust – the practice of treating all accounts with the same minimal level of access until authenticated. In cloud environments, for example, any human or machine identity can be configured with thousands of permissions to access cloud workloads containing critical information. User, group, and role identities are typically assigned permissions depending on their job functions. Providing each identity with its own unique permissions allows users to access what they need, when they need it, without putting company assets at risk of breach. In combination with Zero-Trust, it ensures each identity is only able to gain that access once it is authenticated. The increasing recognition of Zero-Trust as security best practice has led its stock to rise significantly, so much so that 88% of those we researched categorised it as either ‘important’ or ‘very important’ in tackling today’s advanced threats.
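The least-privilege model described above can be sketched as a deny-by-default check; the identity names and permission strings below are hypothetical, and a real cloud IAM engine evaluates far richer policy documents:

```python
# Hypothetical identity -> permission assignments, scoped to job function.
PERMISSIONS = {
    "ci-pipeline":  {"read:artifacts", "write:artifacts"},
    "data-analyst": {"read:warehouse"},
}

def authorize(identity, permission, authenticated):
    """Zero-trust check: deny by default; grant only after authentication,
    and only if the permission was explicitly assigned to this identity."""
    if not authenticated:
        return False
    return permission in PERMISSIONS.get(identity, set())

assert authorize("data-analyst", "read:warehouse", authenticated=True)
assert not authorize("data-analyst", "read:warehouse", authenticated=False)
assert not authorize("data-analyst", "write:artifacts", authenticated=True)
```

Everything not explicitly granted is refused, which is the property that keeps a compromised or over-provisioned identity from reaching workloads outside its job function.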


What to Know About Updates to the PCI Secure Software Standard

The PCI Council made several clarifications to controls within the standard, added additional guidance to a couple of sections, and added its new module specific to Terminal Software Requirements, which applies to software intended for deployment and execution on payment terminals. Specific to the new module of the Secure Software Standard, Module B, Terminal Software Requirements focus on software intended for deployment and execution on payment terminals or PCI-approved PIN Transaction Security (PTS) point-of-interaction (POI) devices. In total, the new section adds 50 controls covering five control objectives. ... Similar to Terminal Software Attack Mitigation, Terminal Software Security Testing clearly calls out the need to ensure software is "rigorously" tested for vulnerabilities prior to each release. The software developer is expected to have a documented process that is followed to test software for vulnerabilities prior to every update or release. The control tests in this objective continue to highlight secure software development best practices – testing for unnecessary ports or protocols, identifying unsecure transmissions of account data, identification of default credentials, hard-coded authentication credentials, test accounts or data, and/or ineffective software security controls.


Reawakening Agile with OKRs?

The approach I found works best is to lead with OKRs - what the team want to do. So throw your backlog away and adopt a just-in-time requirements approach. Stop seeing "more work than we have money and time to do" as a sign of failure and see it is a sign of success. Every time you need to plan work return to your OKRs and ask: What can we do now, in the time we have, to move closer to our OKRs? Stop worrying about burning-down the backlog and put purpose first, remember why the team exists, ask Right here, right now, how can we add value? Used in a traditional MBO-style one might expect top managers to set OKRs which then cascade down the company with each team being given their own small part to undertake. That would require a Gosplan style planning department and would rob teams of autonomy and real decision making power. (Gosplan was the agency responsible for creating 5-year plans in the USSR, and everyone knows how that story ended.) Instead, leaders should lead. Leaders should stick to big things. They should emphasise the purpose and mission of the organization, they should set large goals for the whole organization but those goals should not be specific for teams.


Intel Plugs 29 Holes in CPUs, Bluetooth, Security

Several of the 29 vulnerabilities are rated as high-severity – including four local privilege escalation vulnerabilities in firmware for Intel’s CPU products; another local privilege escalation vulnerability in Intel Virtualization Technology for Directed I/O (VT-d); a network-exploitable privilege escalation vulnerability in the Intel Security Library; another locally exploitable privilege escalation in the NUC family of computers; yet more in its Driver and Support Assistant (DSA) software and RealSense ID platform; and a denial-of-service (DoS) vulnerability in selected Thunderbolt controllers. ... “Interestingly, it’s in the firmware that controls the CPUs, not in the host operating system,” he continued. “We’re used to automatically applying updates for operating systems and software products – and even then we still occasionally see updates that result in the dreaded blue screen of death.” Applying firmware updates is not as well-managed as software updates, he noted, likely because they’re tougher to test … which means they pack more inherent risk.


How to Deploy Emotional Intelligence for Work Success

“In simplest terms, empathy is putting yourself in someone else’s shoes,” writes Denna Ritchie in a Calendar article. Possessing this is arguably the most important leadership skill. After all, being empathetic is the foundation when building and fortifying social connections. What’s more, it can create a more loyal, engaged, and productive team. As if that weren’t enough, empathy increases happiness, teaches presence, and fosters innovation and collaboration. ... Speaking of vulnerability, psychologist Nick Wignall defines it as “the willingness to acknowledge your emotions -- especially painful ones.” He clarifies “that when we talk about vulnerability, we’re usually referring to emotional vulnerability. When your best friend suggests that you should work on being more vulnerable in your relationship, they’re probably not talking about making yourself more physically vulnerable.” In short, vulnerability is all about emotions. In particular, difficult emotions like anxiety, frustration, and shame. The other part of the equation is acknowledging these negative emotions and knowing how to address them.


Becoming a Self-Taught Cybersecurity Pro

If you are looking to take your IT career in a new direction where there's loads of demand, there are several interesting subspecialities, and the pay continues to increase, a career in cybersecurity can't be beat right now. It's impossible to ignore all the high-profile attacks -- from the SolarWinds supply chain attack impacting multiple government agencies, to the more recent spate of ransomware attacks against gas pipeline company Colonial Pipeline and meat producer JBS, to name a few. The move to work from home and to accelerate digital transformations has only increased the alert level and the demand for cybersecurity pros. "In cybersecurity right now there's a significant shortage of candidates," said Ariel Weintraub, chief information security officer at MassMutual. Her cybersecurity team is hiring from general IT pros and also "recruiting from a wide variety of educational backgrounds," not just technology. Her organization is looking for problem solvers with intellectual creativity. But if you just show up at the hiring office with your liberal arts degree or your cybersecurity certification, how do you stand out from the crowd of other applicants interested in cybersecurity?


The 6 steps to implementing zero trust

When looking for short term wins in pursuit of a long-term goal, businesses should look to target a single application or a small collection of applications that would most benefit from adopting a zero trust security model – critical applications that key decision makers are more aware of, which will help demonstrate the return on investment (ROI) along the way. Companies also need to understand that this is a learning process, and thus need to be comfortable in adapting their approach as they learn more about what they are trying to protect. Adopting zero trust means businesses will be re-positioning the usual access models, and this may require solicitation and education of stakeholders. Part of the process however is understanding these dependencies and catering for them in the program. ... The overall aim for businesses is to make quick and measurable progress, so trying to address too many areas at once would be counterproductive. Just like how a business would take a very focused approach when identifying what applications to protect at this stage, they should apply a similar attitude when determining how to approach zero trust itself.



Quote for the day:

"The test we must set for ourselves is not to march alone but to march in such a way that others will wish to join us." -- Hubert Humphrey

Daily Tech Digest - June 09, 2021

Avoiding ransomware: what security & risk leaders need to know

First, organisations need to determine the OEM provider’s approach to secure product management, from ideation to end of life. Determining this from the onset will help CIOs understand the core competencies of a product security officer, enabling them to cultivate the skills that are needed to productise security features, including product roadmap, planning and lifecycle management. Second, a focus on an integrated digital security approach, which looks holistically across IT and data, product, and operations-related technology, is needed. Currently, too many companies fail to see convergence, leaving key features at risk of being hacked – easily. Companies must look at their supplier risk. Supplier risk has, traditionally, focused on the data and IT infrastructure security of the supply chain, usually missing crucial elements, like product security, which needs to be factored in for stronger security. More importantly, some supply chain leaders are still using old vendor risk policies with OEMs that have increasingly become more digital, compromising the security of new products and devices – and once again leaving the window ajar for hackers to jump in.


For CISOs and artificial intelligence to evolve, trust is a must

With concerns rising from consumers and citizens and the increasing need for more ethics and trust, we need to put limits to ensure sound and fair use of AI technologies. The new EU Artificial Intelligence Act is beneficial because it will dictate the rules and force companies to examine the societal implications of rapid technology adoption. We must find a balance between technology benefits and risks. With the emergence of AI-enabled applications, traditional surveillance is transforming into smart video with new use cases that transcend what we consider surveillance today. Unfortunately, under the pretext of protection, camera operators risk exposing everyone within sight. We tend to overlook what data is collected or if it is secure for the greater good. Any technology use and innovation must be transparent and explainable. In 2020, amidst the COVID-19 disruption, France launched its contact tracing application, but its adoption was incredibly low because most citizens questioned the technology used and how the data was collected and stored. It forced the French government to rethink its approach and launch a new, “enriched” version of the application.


The Creepy Side Of Emotion Recognition Technology

AI experts say emotion recognition systems are based on the assumption that humans manifest emotions in similar ways. Something as simple as a raised eyebrow may have different meanings in different cultures. Luke Stark, assistant professor in the faculty of information and media studies at the University of Western Ontario, said in an interview, “Emotions are simultaneously made up of physiological, mental, psychological, cultural, and individually subjective phenomenological components. No single measurable element of an emotional response is ever going to tell you the whole story. Philosopher Jesse Prinz calls this ‘the problem of parts.’” In a recent essay for Nature, Professor Kate Crawford said many such algorithms are based on psychologist Paul Ekman’s study, conducted in the 1960s, on nonverbal behaviour. According to him, there are six basic emotions: happiness, sadness, fear, anger, surprise and disgust. Ekman’s work and ideas have formed the basis for emotion-detection technologies used by giants such as Microsoft, IBM, and Amazon.


Three Critical Success Factors for Master Data Management

The biggest danger to a nascent MDM program is starting with the wrong objectives, even though those objectives can often sound quite right. The best practice here is to start with discrete and measurable business outcomes. A key acid test in this scenario is the ability to describe the outcomes of MDM in nontechnical terms that the business can understand and champion, both before and after they are delivered. If you can’t do this, then you likely have the wrong objective! ... My experience has shown that the vast majority of enterprises stumble at this point, but it’s a great method to get IT teams to see the issue that they are eventually going to have in maintaining momentum over the life of the MDM program. It is also helpful to consider business outcomes as divided along two axes: those that make money vs. those that save money, and current but sub-optimal vs. net new business processes. While most IT teams are capable of solving those use cases in the lower left quadrant on their own, true digital transformation resides in the upper right quadrant, and requires full participation from the business in identifying, describing, and quantifying these outcomes.


CSPM explained: Filling the gaps in cloud security

The issue for all cloud-based technologies is that they inherently lack a perimeter. This means that while you can have some protection, no simple method can determine which processes or persons are supposed to have access and keep out those who don’t have access rights. You need a combination of protective measures to ensure this. The other challenge is that manual processes can’t keep up with scaling, containers, and APIs. This is the whole point why what is now called infrastructure as code has caught on, in which infrastructure is managed and provisioned by machine-readable definition files. These files depend on an API-driven approach. This approach is integral to cloud-first environments because it makes it easy to change the infrastructure on the fly, but also makes it easy to create misconfigurations that leave the environment open to vulnerabilities. Speaking of containers, it is also hard to track them across the numerous cloud offerings that are available. Amazon Web Services (AWS) alone has its Elastic Container Service, its serverless compute engine Fargate ...
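A CSPM-style scan over infrastructure-as-code definitions can be sketched as policy rules applied to parsed resources. The resource shapes, field names, and rules below are invented for illustration; real tools parse Terraform, CloudFormation, and live cloud APIs:

```python
# Hypothetical resource definitions, as a CSPM tool might parse from IaC files.
resources = [
    {"name": "logs-bucket", "type": "storage_bucket",
     "public_read": False, "encrypted": True},
    {"name": "backups", "type": "storage_bucket",
     "public_read": True, "encrypted": False},
]

# Each rule pairs a finding label with a predicate that flags a violation.
RULES = [
    ("public storage bucket",
     lambda r: r["type"] == "storage_bucket" and r["public_read"]),
    ("unencrypted at rest",
     lambda r: not r.get("encrypted", False)),
]

def scan(resources):
    """Evaluate every resource against every policy rule; report violations."""
    findings = []
    for res in resources:
        for label, violated in RULES:
            if violated(res):
                findings.append((res["name"], label))
    return findings

for name, issue in scan(resources):
    print(f"{name}: {issue}")
```

Because the definitions are machine-readable, the same scan runs on every commit at any scale, which is exactly where manual review falls behind.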


Tough regulations are coming for the cryptocurrency sector

The cryptocurrency sector needs an international framework that regulates it. This could be introduced to restrict its usage in all countries. At the moment, countries have a disjointed approach to regulating this sector – if they are even regulating it at all. Some countries such as Japan passed regulations in favor of cryptocurrencies, recognizing them as legal property, and the sector is under the entire supervision of the Financial Services Agency. Other countries like India are looking to ban this sector; in March 2021, the Indian government was due to introduce a digital currency bill that would have made cryptocurrencies illegal in the country. China is furthering its restrictions by prohibiting financial institutions from engaging in related transactions. The decision to restrict or ban the use of cryptocurrencies by countries is an attempt to limit the influence that the sector can have on the world economy, as they wouldn’t want to surrender the control of their economy to a decentralized currency. In the UK, the Bank of England released a discussion paper in which it explains that stablecoins should be expected to meet the same regulations as fiat currencies; the paper also mentions that the Bank is exploring the potential introduction of its own digital currency, the “Britcoin”.


What Makes Quantum Computing So Hard to Explain?

Let’s start with quantum mechanics. (What could be deeper?) The concept of superposition is infamously hard to render in everyday words. So, not surprisingly, many writers opt for an easy way out: They say that superposition means “both at once,” so that a quantum bit, or qubit, is just a bit that can be “both 0 and 1 at the same time,” while a classical bit can be only one or the other. They go on to say that a quantum computer would achieve its speed by using qubits to try all possible solutions in superposition — that is, at the same time, or in parallel. This is what I’ve come to think of as the fundamental misstep of quantum computing popularization, the one that leads to all the rest. From here it’s just a short hop to quantum computers quickly solving something like the traveling salesperson problem by trying all possible answers at once — something almost all experts believe they won’t be able to do. The thing is, for a computer to be useful, at some point you need to look at it and read an output. But if you look at an equal superposition of all possible answers, the rules of quantum mechanics say you’ll just see and read a random answer. And if that’s all you wanted, you could’ve picked one yourself.
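The last point can be simulated classically. The sketch below is plain Python, not a quantum library: it encodes an equal superposition over all n-qubit answers as equal amplitudes, applies the Born rule (probability = |amplitude|²), and "measures". The result is just a uniformly random answer, exactly as the paragraph argues.

```python
import random

def measure_equal_superposition(n_qubits, rng):
    """An equal superposition gives every basis state amplitude
    1/sqrt(2**n), so each outcome has probability 1/2**n."""
    amplitude = 1 / (2 ** n_qubits) ** 0.5
    prob = amplitude ** 2              # Born rule: |amplitude|**2 per state
    outcomes = list(range(2 ** n_qubits))
    return rng.choices(outcomes, weights=[prob] * len(outcomes))[0]

rng = random.Random(0)
samples = [measure_equal_superposition(3, rng) for _ in range(8000)]
# Every 3-bit answer shows up about equally often: reading out an equal
# superposition tells you nothing you couldn't get by guessing.
```

Real quantum algorithms avoid this trap by choreographing interference so that wrong answers cancel and right answers reinforce before the measurement, which is far harder than "trying everything at once".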


The Future Of Crypto And Blockchain: Fintech 50 2021

One notable graduate of the list is Coinbase, the largest cryptocurrency exchange in the United States, which shook the industry and public markets with its April 14 Nasdaq debut – the largest direct listing in history. At one point during the opening day, Coinbase’s market cap exceeded $100 billion, setting a high bar for crypto startups still eyeing a public offering. Two of this year’s members, Kraken and Gemini (also cryptocurrency exchanges), have discussed going public in the future. But fintech is no longer just a tale of corporate success. Cryptocurrency lenders and exchanges are slowly giving way to the new hot shot of the class – decentralized finance (DeFi). An umbrella term for blockchain-based applications and protocols aiming to replace traditional financial intermediaries like banks and brokerages, DeFi skyrocketed in popularity and market capitalization over the past 12 months – from just over $1 billion in locked value in June 2020 to the current $67.9 billion. The largest among DeFi platforms are lending and borrowing protocols, such as Aave and MakerDAO, and decentralized exchanges like Uniswap and SushiSwap – all built on Ethereum.


Google Hopes AI Can Turn Search Into a Conversation

In the “Rethinking Search” paper, the Google researchers call indexing the workhorse of modern search. But they envision doing away with indexing by using ever-larger language models that can understand more queries. The Knowledge Graph, for example, may serve up answers to factual questions, but it’s trained on only a small portion of the web. Using a language model built from more of the web would allow a search engine to make recommendations, retrieve documents, answer questions, and accomplish a wide range of tasks. The authors of the Rethinking Search paper say the approach has the potential to create a “transformational shift in thinking.” Such a model doesn’t exist. In fact the authors say it may require the creation of artificial general intelligence or advances in fields like information retrieval and machine learning. Among other things, they want the new approach to supply authoritative answers from a diversity of perspectives, clearly reveal its sources, and operate without bias. A Google spokesperson described LaMDA and MUM as part of Google’s research into next-generation language models and said internal pilots are underway for MUM to help people with queries on billions of topics.


Building Reliable Software Systems with Chaos Engineering

Complex systems are inevitable. That’s the short answer, but we can expand on that a bit. As humans, we deal with complexity every day, but the way we deal with it is to make mental models or abstractions of that complexity. In everyday life we deal with other complex systems such as automobile traffic, interactions with other people and animals, or even systems at a societal level. Decades of IT work have focused on making system models simple (e.g. the three-tier web app), and that works great, when it is possible. For better or worse, the situations where that is possible are diminishing. We are entering a world where most, and eventually nearly all, software systems will be complex. What do we mean by “complex”? In this case we mean that a system is complex if it is too large, and has too many moving parts, for any single human to mentally model the system with predictive accuracy. Twenty years ago, I could write a content management system and basically understand all of the working parts. I could tell you, roughly, what effect a change to the performance of a query would have on the overall performance of the rest of the application, without having to actually try it. That is no longer the case.
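When a system is too complex to model mentally, chaos engineering replaces prediction with experiment: inject a failure and observe what actually happens. The sketch below is purely illustrative (the dependency, the fault, and the fallback are all invented), but it shows the shape of such an experiment: establish the steady state first, then inject the fault and verify the system degrades gracefully.

```python
class FlakyDependency:
    """Stand-in for a downstream service we can inject faults into."""
    def __init__(self, fail=False):
        self.fail = fail

    def fetch(self):
        if self.fail:
            raise TimeoutError("injected fault")
        return "live data"

def handler(dep, cached="stale data"):
    """Serve live data, degrading to a cached copy if the dependency fails."""
    try:
        return dep.fetch()
    except TimeoutError:
        return cached

# Steady state first: confirm normal behaviour.
assert handler(FlakyDependency()) == "live data"
# The experiment: inject the fault and observe, rather than predict.
assert handler(FlakyDependency(fail=True)) == "stale data"
```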



Quote for the day:

"Leadership without character is unthinkable - or should be." -- Warren Bennis

Daily Tech Digest - June 08, 2021

DeepMind scientists: Reinforcement learning is enough for general AI

A different approach to creating AI, proposed by the DeepMind researchers, is to recreate the simple yet effective rule that has given rise to natural intelligence. “[We] consider an alternative hypothesis: that the generic objective of maximising reward is enough to drive behaviour that exhibits most if not all abilities that are studied in natural and artificial intelligence,” the researchers write. This is basically how nature works. As far as science is concerned, there has been no top-down intelligent design in the complex organisms that we see around us. Billions of years of natural selection and random variation have filtered lifeforms for their fitness to survive and reproduce. Living beings that were better equipped to handle the challenges and situations in their environments managed to survive and reproduce. The rest were eliminated. This simple yet efficient mechanism has led to the evolution of living beings with all kinds of skills and abilities to perceive, navigate, modify their environments, and communicate among themselves. 
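The reward-is-enough hypothesis can be illustrated with the simplest reinforcement-learning setup. The toy corridor, rewards, and hyperparameters below are my own invention, not from the DeepMind paper; the point is only that goal-directed behaviour emerges from nothing but a scalar reward signal, with no behaviour designed in by hand.

```python
import random

# Toy corridor: states 0..4, reward 1.0 only for reaching state 4.
N, GOAL = 5, 4
ACTIONS = (-1, +1)                     # step left or step right
q = {(s, a): 0.0 for s in range(N) for a in ACTIONS}

rng = random.Random(0)
alpha, gamma, eps = 0.5, 0.9, 0.2      # learning rate, discount, exploration

for _ in range(500):                   # episodes
    s = 0
    while s != GOAL:
        if rng.random() < eps:         # explore
            a = rng.choice(ACTIONS)
        else:                          # exploit current value estimates
            a = max(ACTIONS, key=lambda act: q[(s, act)])
        s2 = min(max(s + a, 0), N - 1)
        r = 1.0 if s2 == GOAL else 0.0
        target = r + gamma * max(q[(s2, b)] for b in ACTIONS)
        q[(s, a)] += alpha * (target - q[(s, a)])
        s = s2

# Nothing but the reward signal shaped the policy: in every non-terminal
# state, stepping right now has the higher learned value.
assert all(q[(s, +1)] > q[(s, -1)] for s in range(GOAL))
```

The researchers' claim is, of course, far stronger than this toy suggests: that sufficiently rich environments plus reward maximization could drive perception, language, and social abilities, not just pathfinding.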


How To Become A Machine Learning Engineer?

New roles such as machine learning architect are being created today. As the platform gets bigger, the machine learning engineer should handle the entire architecture and evolve to meet the needs of the data science, machine learning and data analytics organisations. The most important aspect of an ML engineer is the focus on production and model deployment — not just code that works, but code that functions in the real world, alongside understanding industry best practices to successfully integrate and deploy machine learning models. For starters, having a computer science, robotics, engineering or physics degree, along with competencies in C, C++, Java, Python, R, Scala, Julia, and other enterprise languages, helps. Plus, a strong understanding of databases adds weightage. At the experience level, software engineers, software developers, and software architects are cut out for machine learning engineering roles. “It is almost a straight line from cloud architect to ML engineer and ML architect, as these two roles have so much overlap. If you understand data science and machine learning, you can understand models,” said Vashishta.


.NET Ranks High in Coding Bootcamp Report

Microsoft's .NET development framework ranked high in recent research about coding bootcamps, or "immersive technology education." With an average cost of about $14,000, these accelerated learning programs can last from six to 28 weeks -- averaging about 14 -- and promise to advance careers in both technical chops and bottom-line salary increases. Course Report studies the industry and presents its findings in annual reports that can help coders pick the best option, among some 500 around the world, with the choice of programming language being a primary factor. "Coding bootcamps employ teaching languages to introduce students to the world of programming," Course Report said in its latest study: Coding Bootcamps in 2021, an update of a 2020 report. "While language shouldn't be the main deciding factor when choosing a bootcamp, students may have specific career goals that guide them towards a particular language. In that case, first decide whether you'd prefer to learn web or mobile development. For the web, your main choices are Ruby, Python, LAMP stack, MEAN stack and .NET languages."


Amazon Sidewalk starts sharing your WiFi tomorrow, thanks

Amazon Sidewalk will create a mesh network between smart devices that are located near one another in a neighborhood. Through the network, if, for instance, a home WiFi network shuts down, the Amazon smart devices connected to that home network will still be able to function, as they will be borrowing internet connectivity from neighboring products. Data transfer between homes will be capped, and the data communicated through Amazon Sidewalk will be encrypted. Amazon smart device owners will automatically be enrolled into Amazon Sidewalk, but they can opt out before a June 8 deadline. That deadline has irked many cybersecurity and digital rights experts, as Amazon Sidewalk itself was not unveiled until June 1—just one week before a mass rollout. Jon Callas, director of technology projects at Electronic Frontier Foundation, told the news outlet ThreatPost that he did not even know about Amazon’s white paper on the privacy and security protocols of Sidewalk until a reporter emailed him about it. “They dropped this on us,” Callas said in speaking to ThreatPost. “They gave us seven days to opt out.”


Researchers Discover a Molecule Critical to Functional Brain Rejuvenation

Recent studies suggest that new brain cells are being formed every day in response to injury, physical exercise, and mental stimulation. Glial cells, and in particular the ones called oligodendrocyte progenitors, are highly responsive to external signals and injuries. They can detect changes in the nervous system and form new myelin, which wraps around nerves and provides metabolic support and accurate transmission of electrical signals. As we age, however, less myelin is formed in response to external signals, and this progressive decline has been linked to the age-related cognitive and motor deficits detected in older people in the general population. Impaired myelin formation also has been reported in older individuals with neurodegenerative diseases such as Multiple Sclerosis or Alzheimer’s and identified as one of the causes of their progressive clinical deterioration. ... The discovery also could have important implications for molecular rejuvenation of aging brains in healthy individuals, said the researchers. Future studies aimed at increasing TET1 levels in older mice are underway to define whether the molecule could rescue new myelin formation and favor proper neuro-glial communication.


Fujifilm refuses to pay ransomware demand, restores network from backups

Jake Moore, cybersecurity specialist at internet security firm ESET, said refusing to pay a ransom is “not a decision to be taken lightly.” Ransomware gangs often threaten to leak or sell sensitive data if payment is not made. However, Fujifilm Europe said it is “highly confident that no loss, destruction, alteration, unauthorised use or disclosure of our data, or our customers’ data, on Fujifilm Europe’s systems has been detected.” The spokesperson added: “From a European perspective, we have determined that there is no related risk to our network, servers and equipment in the EMEA region or that of our customers across EMEA. We presently have no indication that any of our regional systems have been compromised, including those involving customer data.” It is not clear if the ransomware gang stole Fujifilm data from the affected network in Japan. Fujifilm declined to comment when asked if those responsible had threatened to publish data if the ransom is not paid. According to security news site Bleeping Computer, Fujifilm was infected with the Qbot trojan last month.


Fixing Risk Sharing With Observability

The challenge is that one party, the developers, has more information than other parties. That information asymmetry is what creates unbalanced risk sharing. Coping with information asymmetry has led to all kinds of new collaborative models, starting with DevOps and evolving into DevSecOps and other permutations like BizDevSecOps. True collaboration has been hard to come by. Early DevOps efforts are often successful, but scaling beyond five to seven teams is difficult because teams lack the breadth of experience in IT operations or the SRE capacity to staff multiple product teams. The change velocity DevOps teams can achieve is often far greater than SREs and SecOps can absorb, making information asymmetry worse. If teams can’t maintain high levels of collaboration and communication, another option must be developed. Observability practices, like collecting all events, metrics, traces and logs, allow SREs and SecOps teams to interrogate applications about their behavior without knowing which questions they want to ask ahead of time. However, observability only works if applications, and the infrastructure they rely on, are instrumented.
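A minimal sketch of the instrumentation point, assuming a deliberately simplified event shape (a real system would use an observability SDK such as OpenTelemetry and ship events to a backend): each call emits a structured event combining a trace id, a metric, and a status, and questions can be asked of the collected events after the fact.

```python
import time
import uuid

EVENTS = []  # stand-in for an observability backend

def instrumented(fn):
    """Wrap a function so every call emits a structured event."""
    def wrapper(*args, **kwargs):
        start = time.perf_counter()
        status = "error"
        try:
            result = fn(*args, **kwargs)
            status = "ok"
            return result
        finally:
            EVENTS.append({
                "trace_id": uuid.uuid4().hex,
                "name": fn.__name__,
                "status": status,
                "duration_ms": (time.perf_counter() - start) * 1000,
            })
    return wrapper

@instrumented
def checkout(total):
    if total < 0:
        raise ValueError("negative total")
    return f"charged {total}"

checkout(42)
try:
    checkout(-1)
except ValueError:
    pass

# A question nobody planned to ask when instrumenting:
error_rate = sum(e["status"] == "error" for e in EVENTS) / len(EVENTS)
assert error_rate == 0.5
```

The key property is that the error-rate query was not decided in advance; because the raw events are kept, SREs and SecOps can interrogate behaviour without going back to the developers, which is what reduces the information asymmetry.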


How to Structure a Digital Transformation Project Team

The first role on a project team, and arguably the most important, is your executive steering committee. This is typically a cross-functional group of executives within your organization that is responsible for setting the vision for the overall transformation. They’re responsible for approving scope changes, or any sort of material changes to the project plan or the budget. They’re ultimately responsible for setting the tone and the vision for the overall future state. The question to ponder here is: how do we want our operating model to look, and what do we want our organization to look like in the future? For lack of a better word, what do we want to be when we grow up? Before I get into the rest of the project team, something that’s very important even before we talk about other team roles is who should fill those roles. The first thing you want to do is make sure that the steering committee is aligned on the overall transformation vision, strategy, and objectives. If you start filling out the project team prematurely, when you don’t have that alignment


Windows Container Malware Targets Kubernetes Clusters

After it compromises web servers, Siloscape uses container escape tactics to achieve code execution on the Kubernetes node. Prizmant said that Siloscape’s heavy use of obfuscation made it a chore to reverse-engineer. “There are almost no readable strings in the entire binary. While the obfuscation logic itself isn’t complicated, it made reversing this binary frustrating,” he explained. The malware obfuscates functions and module names – including simple APIs – and only deobfuscates them at runtime. Instead of just calling the functions, Siloscape “made the effort to use the Native API (NTAPI) version of the same function,” he said. “The end result is malware that is very difficult to detect with static analysis tools and frustrating to reverse engineer.” “Siloscape is being compiled uniquely for each new attack, using a unique pair of keys,” Prizmant continued. “The hardcoded key makes each binary a little bit different than the rest, which explains why I couldn’t find its hash anywhere. It also makes it impossible to detect Siloscape by hash alone.”
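The runtime-deobfuscation technique described here is generic and widely documented; the sketch below is not Siloscape's actual code. It shows the idea in miniature: strings live in the binary only in encoded form, so a static scan finds nothing readable, and the plain text exists only in memory at the moment of use. The single-byte XOR key is an arbitrary illustration (real samples reportedly use per-build keys, which is why hashes differ between attacks).

```python
KEY = 0x5A  # arbitrary single-byte XOR key, for illustration only

def obfuscate(text, key=KEY):
    """What a build step would do: store the string encoded in the binary."""
    return bytes(b ^ key for b in text.encode())

def deobfuscate(blob, key=KEY):
    """What the malware does at runtime: recover the name just before use."""
    return bytes(b ^ key for b in blob).decode()

stored = obfuscate("NtCreateFile")
assert b"NtCreateFile" not in stored          # static scan sees nothing readable
assert deobfuscate(stored) == "NtCreateFile"  # plain text exists only at runtime
```

Defenders counter this with dynamic analysis and behavioural detection, since the decoded strings and resolved API calls do appear once the sample runs.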


Machine learning at the edge: TinyML is getting big

Whether it's stand-alone IoT sensors, devices of all kinds, drones, or autonomous vehicles, there's one thing in common. Increasingly, data generated at the edge are used to feed applications powered by machine learning models. There's just one problem: machine learning models were never designed to be deployed at the edge. Not until now, at least. Enter TinyML. Tiny machine learning (TinyML) is broadly defined as a fast growing field of machine learning technologies and applications including hardware, algorithms and software capable of performing on-device sensor data analytics at extremely low power, typically in the mW range and below, and hence enabling a variety of always-on use-cases and targeting battery operated devices. ... First, the working definition of what constitutes TinyML was, and to some extent still is, debated. What matters is how devices can be deployed in the field and how they're going to perform, said Gousev. That will be different depending on the device and the use case, but the point is being always on and not having to change batteries every week. That can only happen in the mW range and below.



Quote for the day:

"If you're relying on luck, you have already given up." -- Gordon Tredgold

Daily Tech Digest - June 07, 2021

Why data storage isn’t a ‘one size fits all’ solution for IoT

Certain storage solutions are designed with the properties of endurance and resilience at the forefront. These offerings, including the highly reliable and industrial-grade e.MMC (embedded Multimedia Card) and UFS (Universal Flash Storage) embedded flash drives, can endure harsh environments, including those with extreme temperatures or vibrations, such as in a factory setting. One common use case that requires such solutions is in industrial-use drones. These drones, for example, are used by oil-rig workers to complete inspections more quickly and without risking worker safety. Similarly, search and rescue drones require high-performance in varying environments, such as those with fluctuating and extreme weather patterns. One way to achieve low latency is to bring compute and storage nearer to the place it is used, like the network edge, or to devices closer to the edge. This helps enable rapid real-time data transfers and analysis at the edge, where low latency is a fundamental requirement. Smart cities, for example, use and act on real-time data. Emergency services can communicate with traffic lights to synchronise and provide quicker and more direct access to critical locations whilst holding traffic at bay.


The future of storage resides at the intersection of the edge and cloud

Firstly, the digital leaders of the future can’t be built on the technology approaches of the past – IT needs to evolve to provide a technology foundation that accelerates digital innovation. Today’s storage infrastructure technology is designed to make hybrid cloud environments and data produced at the edge easier to deploy and manage. These purpose-built suites of solutions have evolved to fill an essential role in the data centre, providing ever-expanding levels of performance, capacity and resiliency for mission-critical workloads. Modern storage architecture is helping businesses succeed, by not only supporting current business needs, but also allowing scale to evolve IT infrastructure as business dynamics change. Therefore, organisations must refresh their storage infrastructure on a regular basis and keep up with the increased data demands by eliminating ageing infrastructure that is more susceptible to failures that cause outages/downtime. Modern storage infrastructure also frequently includes advanced data protection features that help ensure the on-premises data remains safe and secure. 


Artificial Intelligence: The Evolution Of Neural Networks

Artificial neural networks, along with machine learning and artificial intelligence, can flawlessly predict severe illnesses. For example, the output waves of an ECG can be analyzed to understand a patient’s heart and predict heart attacks well in time. Similarly, with an adequate amount of data, dementia can be identified in the early stages by understanding and analyzing EEG patterns. Along with diagnosis, artificial neural networks and machine learning can work together in discovering drugs for the treatment of multiple serious illnesses. Furthermore, the introduction of autonomous cars has the potential to reduce traffic jams and accidents. Neural networks can be extensively used for predicting natural calamities like earthquakes, floods, and volcanic eruptions. Data like seismograph readings and atmospheric pressure can be collected on a daily basis to analyze and predict the occurrence of natural calamities. Additionally, neural networks can effectively predict changes in the weather and the climate. Looking to the future of artificial neural networks, chatbots are already impacting the retail industry tremendously.


Realistic Patch Management Tips, Post-SolarWinds

Security hygiene, including patching, is an essential part of defense, says Pironti. Nevertheless, he says, "We're fooling ourselves if we think we can defend ourselves against a nation-state attack [like the SolarWinds incident] while continuing to release code at the speed we do." Curtis Franklin, senior analyst of enterprise security management at Omdia, says companies must have patch management technology to help automate the process now, "because it's gotten really beyond human-scale at this point." Despite the recent high-profile example of a malicious software update, Pironti says companies should not shy away from deploying updates. "I think we would be doing ourselves a disservice if we started distrusting patches," he says. "I'd rather trust my vendors than question them when there's an exploit in the wild." He does, however, say it's fair to ask for better security hygiene in the software development lifecycle. "We've been trained as a society to accept flawed code," says Pironti. 


Saying goodbye to Internet Explorer might be more complicated than you realise

It's going to be odd to see IE go, as it's been part of Windows' internals for almost as long as it's been around, its Trident engine powering tools like Outlook's browser view and Windows' Help system. Even on systems that have the new Edge set as default, opening an email from Outlook in browser view opens it in Internet Explorer. That's because Outlook uses a technique that encapsulates HTML and any image resources in a single file. MHTML, "MIME encapsulation of aggregate HTML documents", was designed for a world where web pages delivered interactivity with applets or ActiveX controls or Flash, and where designers wanted that dynamic content to be part of an email message. It's a useful tool for building formatted emails, using familiar HTML authoring tools, but bundling all the necessary resources in a single archive that's attached to a message. It's an old technique, but one that's still in use. And with IE about to disappear, can you view those messages in a modern browser like Edge? The answer to that question is complicated. If you set the file associations in Windows 10 to support Outlook's MHTML, emails will open in Edge, but will only display as text and without active links.
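MIME encapsulation of this kind can be sketched with Python's standard email library. This builds a generic multipart/related bundle, the same idea MHTML uses, rather than a byte-for-byte Outlook .mht file; the subject, image bytes, and content id are placeholders.

```python
from email.message import EmailMessage

# Build the message: a plain-text fallback plus an HTML alternative that
# references an embedded image by Content-ID.
msg = EmailMessage()
msg["Subject"] = "Formatted newsletter"
msg.set_content("Plain-text fallback.")
msg.add_alternative(
    '<html><body><img src="cid:logo"></body></html>', subtype="html"
)

# Bundle the image with the HTML part so everything travels in one file.
html_part = msg.get_payload()[1]
html_part.add_related(
    b"\x89PNG placeholder bytes", maintype="image", subtype="png", cid="<logo>"
)

# The HTML alternative is now a self-contained multipart/related archive.
assert html_part.get_content_type() == "multipart/related"
```

Because the HTML, its styling, and its image resources all ride in one archive, nothing needs to be fetched from a server to render the message — which is exactly why the format survives, and why a modern browser that cannot unpack multipart/related shows only the text.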


How The Indian FinTech Is Using AI

CogNext also has an automated technology platform, Platform X, which provides ‘nimble, configurable, interactive, scalable and cost effective’ solutions for regulatory compliance. Such solutions allow financial institutions to control the risk they undertake and improve results in integrity and transparency. Platform X works through a technological framework that enables processing customer data and calculations easily. Another element of CogNext is its AutoML solution, which encourages domain and business experts working in financial institutions to use ML and AI to create business value. Teams can use this to develop advanced AI projects without coding or even understanding the underlying ML algorithms. ... Capital Float employs AI technologies along with human insight to facilitate risk assessment and marketing. AI and ML algorithms help the company comprehend the creditworthiness of applicants, allowing them to choose the right type of loans for each applicant. Capital Float also utilised AI models to better target customers in their marketing campaigns. In 2018, they acquired a leading Personal Finance Management App, Walnut, further pushing them into the credit-solutions industry.


Why Good Arguments Make Better Strategy

Many leaders avoid arguing about strategy at all costs. Arguing is equated with fighting and, at best, is considered an unproductive use of people’s time. This is a mistake. Arguing is the best way to do strategy, especially in groups, provided the arguments follow established rules of engagement that are rooted in the principles of deductive logic. Great strategy demands the exchange and vetting of ideas — both in its development and implementation. Listen to Patty McCord, former chief talent officer at Netflix, who asserted, “The main reason the company could continually reinvent itself and thrive, despite so many truly daunting challenges coming at us so fast and furiously, was that we taught people to ask, ‘How do you know that’s true?’ Or my favorite variant, ‘Can you help me understand what leads you to believe that’s true?’” Such questions spawned vigorous internal debates at Netflix that, McCord said, “helped cultivate curiosity and respect and led to invaluable learning both within the team and among functions." Why is debate so powerful? One reason lies in the fallibility of human reasoning. 


Making the Move to a SaaS Usage-Based Model

Many of those enterprise companies are working hard to add a subscription element to their business. Yet, others lack a strong imperative to move away from the on-premise model, either because their customers are satisfied with their current arrangements or IT imperatives prohibit it. The reality, though, is that the subscription model is quickly becoming a business necessity. A recent CIBC World Markets study found that, on an annual basis, SaaS stocks outperformed the mature software names, with an average stock price return of 83% vs. an average year-to-date mature software return of 22%. SaaS providers enjoy higher valuations because subscription earnings are more predictable, and companies that offer them can generate more revenue over the long haul. Eventually, many enterprise companies will also offer their products on a pure consumption or per-usage basis so customers can try new products for a very low cost (or even free) and expand usage as their needs grow, though usage-based consumption is still in the early stages. Moving to SaaS is not a “flip the switch” exercise. It requires a total shift in how management thinks, operates and compensates, and everyone -- from sales 


Microsoft’s Kate Crawford: ‘AI is neither artificial nor intelligent’

Bias is too narrow a term for the sorts of problems we’re talking about. Time and again, we see these systems producing errors – women offered less credit by credit-worthiness algorithms, black faces mislabelled – and the response has been: “We just need more data.” But I’ve tried to look at these deeper logics of classification and you start to see forms of discrimination, not just when systems are applied, but in how they are built and trained to see the world. Training datasets used for machine learning software that casually categorise people into just one of two genders; that label people according to their skin colour into one of five racial categories, and which attempt, based on how people look, to assign moral or ethical character. The idea that you can make these determinations based on appearance has a dark past and unfortunately the politics of classification has become baked into the substrates of AI. ... Ethics are necessary, but not sufficient. More helpful are questions such as, who benefits and who is harmed by this AI system? And does it put power in the hands of the already powerful? 


The Best First Jobs for People Interested in Entrepreneurship

We are often told to not do "what everyone else is doing." But when it comes to startups, the crowd has a certain wisdom. Traction and exposure beget more traction and exposure, so it's not a bad idea to pay attention to what are currently considered hot startups. Read up on top startup listicles like those on LinkedIn or AngelList. Be agnostic as to what kind of job you can land at these hot companies. Jobs at rapidly growing companies evolve quickly, and titles mean little. Jump in, gain experience and make an impact. Please pay special attention to the quality of their funding sources, as this is an essential indicator of their stability and future financing availability. ... Understanding people's needs and learning how to address them truly is the foundational learning of most successful entrepreneurs. Sales positions involve pitching a product, which is helpful, but more importantly, they give you exposure to lots of external people. Especially when you are young and coming out of school, sales roles are great at quickly taking you out of your comfort zone and forcing you to provide value to real people. 



Quote for the day:

"Many men may see the King in a Kid but it takes a true leader to nurture it" -- Bernard Kelvin Clive

Daily Tech Digest - June 06, 2021

The computer will see you now: is your therapy session about to be automated?

AI research has not improved significantly since that review, she argues. “Based on the available evidence, I’m not optimistic.” Yet she added that a personalized approach could work better. Rather than assuming a bedrock of emotional states that are universally recognizable, algorithms could be trained on a single person over many sessions, including their facial expressions, their voice and physiological measures like their heart rate, while accounting for the context of those data. Then you’d have better chances of developing reliable AI for that person, Barrett says. If such AI systems eventually can be made more effective, ethical issues still have to be addressed. In a newly published paper, Torous, Depp and others argue that, while AI has the potential to help identify mental problems more objectively, and it could even empower patients in their own treatment, first it must address issues like bias. During the training of some AI programs, when they are fed huge databases of personal information so they can learn to discern patterns in them, white people, men, higher-income people, or younger people are often overrepresented. As a result they might misinterpret unique facial features or a rare dialect.


WhatsApp Just Gave 2 Billion Users A Reason To Stay

The specter of regulation continues to hang over Facebook and its Big Tech rivals, but this has raised a different regulatory question: At what point does a privately held communication platform become a utility? Social media can be turned on or off with little consequence. But replacing regulated mobile networks with a multinational “over the top” service that is used by almost everyone is a different deal. WhatsApp’s biggest victory—the reason it’s now on almost all our phones—was its displacement of SMS as the world’s most popular, most ubiquitous, messaging tool. The nearest equivalent is Apple’s iMessage in some markets, especially the U.S. But iMessage isn’t a separate platform from core, regulated messaging. And, more to the point, it’s owned by a product giant, not a data-based advertising giant. WhatsApp’s numbers are interesting. While its penetration in Europe is strong, in the developing world it’s staggering. In Kenya, South Africa, Nigeria, Argentina, Malaysia, Colombia and Brazil it has secured more than 90% of total adult internet users. In most countries, WhatsApp is now the market leader. Think that through when next reading about WhatsApp’s shift into payments and shopping.


Insurance to Mitigate the Risk of AI Systems Coming into View

It’s not clear that AI software suppliers guarantee the accuracy of their algorithms, or that insurance companies cover the risks associated with AI products. Having insurance against AI risk could smooth the path to AI adoption. Among manufacturers trying out AI, many are stuck in “pilot purgatory”–not yet successfully scaling digital transformation. “Greater support for businesses looking to implement new solutions could help to improve the adoption rate,” Yoskovitch stated. Insurers could help enterprises at these three stages of AI adoption, Yoskovitch suggests ... AI failure models are an evolving area of research. “It is not possible to provide prescriptive technological mitigations,” the authors stated. Cyber insurance comes the closest, but is not a perfect fit. If bodily harm occurs because of an AI failure, such as if the image recognition system on an autonomous car fails to perform in snow or frost conditions, cyber insurance is not likely to cover the damage, although it may cover the losses from the interruption of business that results, the authors suggest. 


‘Back to human’: Why HR leaders want to focus on people again

Delivering a great employee experience relies on the same principles used in design thinking for products and services. Like skilled designers, CHROs are starting with the customer and working backward. Just as there is a customer journey with its associated pain points, so there are career journeys in every big organization, each with its own identifiable moments of frustration. One thing HR leaders can do along these lines is to harness the energy and insight of their colleagues to increase engagement among new hires and current employees. Cisco, for instance, launched a 24-hour “breakathon” with more than 800 employees that used design-thinking principles to identify the moments that matter most in the interactions between HR and employees. This session led to a complete redesign of onboarding: YouBelong@Cisco, a full prototype solution that targeted common pain points for people starting careers at the company. HR leaders want to use these technologies to help customize and track the needs of each individual on the employee journey, whether that means advancing educational efforts, helping customers and clients to solve problems, supporting the development of colleagues, or simply being part of a great team.


Plea To ML Researchers: Give Data Curation A Chance

Many experts believe data must be used in its natural form to give an unvarnished output. While there is no problem with this argument, Rogers said, it needs more elaboration. “In that case, the ‘natural’ distribution may not even be what we want: e.g. if the goal is a question answering system, then the ‘natural’ distribution of questions asked in daily life (with most questions about time and weather) will not be helpful,” wrote Rogers. She further added that there is still a lot of research work to be done before developers can study the world as it is. Some developers feel their data is large enough for their training set to encompass the ‘entire data universe’. Rogers said collecting all data is impossible, as it poses legal, ethical, and practical challenges. Meanwhile, many are in favour of developing algorithmic alternatives to data curation. According to Rogers, this is a real possibility; for now, however, such solutions would complement data curation rather than replace it entirely. A few experts believe data curation is part of the process and should not become a task so big that it eclipses the original purpose of developing a model.
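The skew Rogers describes can be made concrete with a toy sketch: if the “natural” stream of questions is dominated by time and weather, the simplest curation step is capping each category. The labels, the cap value, and the helper name `curate_balanced` below are illustrative assumptions, not anything from the article.

```python
import random
from collections import Counter

def curate_balanced(samples, cap):
    """Keep at most `cap` examples per label -- a toy stand-in for data
    curation that downsamples over-represented categories."""
    kept, counts = [], Counter()
    for label, text in samples:
        if counts[label] < cap:
            kept.append((label, text))
            counts[label] += 1
    return kept

# A skewed 'natural' distribution: mostly time/weather questions.
random.seed(0)
stream = ([("time", "what time is it?")] * 70
          + [("weather", "will it rain?")] * 25
          + [("science", "why is the sky blue?")] * 5)
random.shuffle(stream)

curated = curate_balanced(stream, cap=5)
print(Counter(label for label, _ in curated))
# -> Counter({'time': 5, 'weather': 5, 'science': 5})
```

The curated set is far smaller than the raw stream, but a question-answering model trained on it actually sees the rare category instead of drowning in time and weather queries.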


Ultra-high-density hard drives made with graphene store ten times more data

Graphene enables a two-fold reduction in friction and provides better corrosion and wear protection than state-of-the-art solutions. In fact, a single graphene layer reduces corrosion by 2.5 times. Cambridge scientists transferred graphene onto hard disks with iron-platinum as the magnetic recording layer, and tested Heat-Assisted Magnetic Recording (HAMR), a new technology that increases storage density by heating the recording layer to high temperatures. Current carbon-based overcoats (COCs) do not perform at these high temperatures, but graphene does. Thus graphene, coupled with HAMR, can outperform current HDDs, providing an unprecedented data density, higher than 10 terabytes per square inch. “Demonstrating that graphene can serve as protective coating for conventional hard disk drives and that it is able to withstand HAMR conditions is a very important result. This will further push the development of novel high areal density hard disk drives,” said Dr Anna Ott from the Cambridge Graphene Centre, one of the co-authors of this study. A jump in HDDs’ data density by a factor of ten and a significant reduction in wear rate are critical to achieving more sustainable and durable magnetic data recording.
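To put the quoted density figure in perspective, a rough per-surface capacity can be worked out. The platter geometry below (a 3.5-inch-class platter with a recording band between a 1.0-inch inner and 1.85-inch outer radius) is an illustrative assumption, not a figure from the article; only the 10 TB/in² density comes from the text.

```python
import math

# Back-of-the-envelope capacity at the quoted areal density.
areal_density_tb_per_in2 = 10.0   # figure quoted in the article
outer_r_in, inner_r_in = 1.85, 1.0  # assumed recording band (illustrative)

# Usable recording area is the annulus between the two radii.
band_area_in2 = math.pi * (outer_r_in**2 - inner_r_in**2)
capacity_tb_per_surface = areal_density_tb_per_in2 * band_area_in2

print(f"usable area per surface: {band_area_in2:.2f} in^2")
print(f"capacity per surface:    {capacity_tb_per_surface:.0f} TB")
```

Under these assumptions a single platter surface would hold on the order of 76 TB, which shows why even a fraction of the quoted density would dwarf today's multi-platter drives.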


Implementing An Effective Intelligent Master Data Management Strategy

Since MDM is not a one-time implementation or cleansing exercise, business owners must own the data along with the business processes from various departments and units. The data governance process implemented must identify, measure, capture, and rectify data quality issues in the source system itself. To keep the strategy running, a formal model that manages this data as a strategic resource should comprise detailed business rules, data stewardship, data control, and compliance mechanisms. To be effective and supported by stakeholders and senior management, data governance needs to be treated as part of daily responsibilities rather than a one-off initiative. ... Before diving deep into the MDM implementation process, defining a future roadmap is crucial to showing how later stages will be accomplished, consistent with the strategic objectives of the organization. This ensures that your MDM exercise does not turn into a catastrophic event due to structural flaws that corrupt your entire data system. Further, roll out upgrades, test standard communication interfaces regularly, and set benchmarks to quantify KPI success, proving each stage stable before opening the gates to the rest of your data stream.
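The "identify, measure, capture, and rectify" loop described above can be sketched as rule-based quality checks run against source records. This is a minimal illustration, assuming master records are plain dicts; the field names, rules, and reference-data set are hypothetical, not from any particular MDM product.

```python
import re

# Hypothetical business rules for a customer master record.
RULES = {
    "customer_id": lambda v: bool(v),                 # must be present
    "email": lambda v: bool(re.match(r"[^@\s]+@[^@\s]+\.[^@\s]+$", v or "")),
    "country": lambda v: v in {"US", "DE", "IN"},     # reference data check
}

def measure_quality(records):
    """Identify and measure rule violations: returns, per field, the
    indexes of records that break that field's rule."""
    issues = {field: [] for field in RULES}
    for i, rec in enumerate(records):
        for field, rule in RULES.items():
            if not rule(rec.get(field)):
                issues[field].append(i)
    return issues

records = [
    {"customer_id": "C1", "email": "a@example.com", "country": "US"},
    {"customer_id": "",   "email": "not-an-email",  "country": "US"},
    {"customer_id": "C3", "email": "b@example.com", "country": "XX"},
]
print(measure_quality(records))
# -> {'customer_id': [1], 'email': [1], 'country': [2]}
```

In a real governance process the "capture and rectify" steps would route these violations back to stewards of the source system, rather than patching the master copy downstream.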


Neuromorphic Chip: Artificial Neurons Recognize Biosignals in Real Time

The researchers first designed an algorithm that detects HFOs by simulating the brain’s natural neural network: a tiny so-called spiking neural network (SNN). The second step involved implementing the SNN in a fingernail-sized piece of hardware that receives neural signals by means of electrodes and which, unlike conventional computers, is massively energy efficient. This makes calculations with a very high temporal resolution possible, without relying on the internet or cloud computing. “Our design allows us to recognize spatiotemporal patterns in biological signals in real time,” says Giacomo Indiveri, professor at the Institute for Neuroinformatics of UZH and ETH Zurich. The researchers are now planning to use their findings to create an electronic system that reliably recognizes and monitors HFOs in real time. ... However, this is not the only field where HFO recognition can play an important role. The team’s long-term target is to develop a device for monitoring epilepsy that could be used outside of the hospital and that would make it possible to analyze signals from a large number of electrodes over several weeks or months.
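The core mechanism behind a spiking neural network can be illustrated with a minimal leaky integrate-and-fire neuron: the membrane potential leaks over time, integrates incoming signal, and emits a spike when it crosses a threshold. This is a generic textbook sketch, not the UZH/ETH implementation, and the threshold, leak factor, and input values are arbitrary.

```python
def lif_neuron(input_current, threshold=1.0, leak=0.9):
    """Minimal leaky integrate-and-fire neuron (illustrative values):
    at each step the potential decays by `leak`, adds the input, and
    spikes + resets when it reaches `threshold`."""
    v, spikes = 0.0, []
    for i in input_current:
        v = leak * v + i          # leak, then integrate the input
        if v >= threshold:        # threshold crossing -> spike
            spikes.append(1)
            v = 0.0               # reset after the spike
        else:
            spikes.append(0)
    return spikes

# A burst of strong input (loosely, an 'HFO-like' event) amid weak background.
signal = [0.1, 0.1, 0.6, 0.7, 0.8, 0.1, 0.1]
print(lif_neuron(signal))
# -> [0, 0, 0, 1, 0, 0, 0]
```

Because the neuron only responds when recent inputs accumulate fast enough to beat the leak, it is sensitive to the timing of the signal, which is the property that lets networks of such neurons recognize spatiotemporal patterns in real time.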


Hardware buyers are scrambling to find chip shortage work-arounds

Because World Insurance runs most of its operations on a private cloud in its own data center, finding the servers it needs to expand its operations is an ongoing battle. Before the chip shortage, the company would primarily buy white-label servers to add capacity. Now, like so many others, it is sourcing servers from wherever it can find them. Many manufacturers are in the same boat, said Jens Gamperl, CEO of Sourcengine, an online marketplace for electronic components. Gamperl's customers are scrambling to find chips from any source, regardless of whether the supplier and its products have been vetted. ... To ensure some sort of quality control, manufacturers are asking Sourcengine to perform those functions. Price gouging also is a big issue. Parts that cost pennies pre-pandemic are now going for thousands of times more. "I came across, four weeks or five weeks ago, a situation where a 50 cent part was offered to us for $41," he said. For large businesses, these increased expenses shouldn't have a noticeable impact on the bottom line, given that other expenses, such as travel, went to zero, he said.


Hybrid work: How to prepare for the turnover tsunami

Among the multiple factors at play, according to the Prudential Financial survey, are employee concerns about career advancement. ... Additionally, the wide and rapid acceptance of remote work has opened up new job opportunities to work from anywhere. It's a perfect storm for creating some degree of turnover, says Brian Abrahamson, CIO and the associate laboratory director for communications and IT at the U.S. Department of Energy's Pacific Northwest National Lab. "We used to talk about the impacts of fear, uncertainty, and doubt on people. Add to this the impacts of burnout and isolation and you have a recipe for workforce chaos," Roberts says. "A question every CIO should be asking their people managers is, 'Are the recruiters who are trying to poach our people painting a better picture of a future working with their company than we are of ours?'" The time to start addressing anticipated turnover is now. "If you acknowledge that the risk factors affecting the likelihood of increased attrition in the near term are there, the first recommendation I would make is simple: Accept and prepare for it," says Selective Insurance CIO John Bresney.



Quote for the day:

"If you care enough for a result, you will most certainly attain it." -- William James