Daily Tech Digest - June 13, 2021

The race is on for quantum-safe cryptography

Existing encryption systems rely on specific mathematical equations that classical computers aren’t very good at solving — but quantum computers may breeze through them. As a security researcher, Chen is particularly interested in quantum computing’s ability to solve two types of math problems: factoring large numbers and solving discrete logarithms. Pretty much all internet security relies on this math to encrypt information or authenticate users in protocols such as Transport Layer Security. These math problems are simple to perform in one direction, but difficult in reverse, and thus ideal for a cryptographic scheme. “From a classical computer’s point of view, these are hard problems,” says Chen. “However, they are not too hard for quantum computers.” In 1994, the mathematician Peter Shor outlined in a paper how a future quantum computer could solve both the factoring and discrete logarithm problems, but engineers are still struggling to make quantum systems work in practice. While several companies like Google and IBM, along with startups such as IonQ and Xanadu, have built small prototypes, these devices cannot perform consistently, and they have not conclusively completed any useful task beyond what the best conventional computers can achieve.
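
The “easy in one direction, difficult in reverse” property can be made concrete with a few lines of Python. The sketch below is purely illustrative – the prime, base and exponent are toy values, nothing a real protocol would use – but it shows why modular exponentiation is cheap while the naive reverse search (the discrete logarithm) has to grind through the whole group, and why Shor’s approach, which solves that reverse problem efficiently on a quantum computer, is such a threat.

# Toy illustration of the one-way property behind today's public-key crypto.
# Values are tiny so the brute-force search finishes; real systems use moduli
# hundreds of digits long, where the reverse search is infeasible classically.
p = 104_729          # a small prime (the 10,000th prime)
g = 3                # public base (illustrative choice)
secret_x = 54_321    # the private exponent

# Forward direction: modular exponentiation is fast, even for enormous numbers.
y = pow(g, secret_x, p)

# Reverse direction: recovering an exponent from y is the discrete logarithm
# problem. The naive classical approach is to try exponents one at a time.
def brute_force_dlog(g, y, p):
    acc = 1
    for x in range(p):
        if acc == y:
            return x
        acc = (acc * g) % p
    return None

recovered = brute_force_dlog(g, y, p)
print(pow(g, recovered, p) == y)   # True, but only because p is tiny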


Lightbend’s Akka Serverless PaaS to Manage Distributed State at Scale

Up to now, serverless technology has not been able to support stateful, high-performance, scalable applications that enterprises are building today, Murdoch said. Examples of such applications include consumer and industrial IoT, factory automation, modern e-commerce, real-time financial services, streaming media, internet-based gaming and SaaS applications. “Stateful approaches to serverless application design will be required to support a wide range of enterprise applications that can’t currently take advantage of it, such as e-commerce, workflows and anything requiring a human action,” said William Fellows, research director for cloud native at 451 Research. “Serverless functions are short-lived and lose any ‘state’ or context information when they execute.” Lightbend, with Akka Serverless, has addressed the challenge of managing distributed state at scale. “The most significant piece of feedback that we’ve been getting from the beta is that one of the key things that we had to do to build this platform was to find a way to be able to make the data be available in memory at runtime automatically, without the developer having to do anything,” Murdoch said.


Can We Balance Accuracy and Fairness in Machine Learning?

While challenges like these often sound theoretical, they already affect and shape the work that machine learning engineers and researchers produce. Angela Shi looks at a practical application of this conundrum when she explains the visual representation of bias and variance in bull’s-eye diagrams. Taking a few steps back, Federico Bianchi and Dirk Hovy’s article identifies the most pressing issues the authors and their colleagues face in the field of natural language processing (NLP): “the speed with which models are published and then used in applications can exceed the discovery of their risks and limitations. And as their size grows, it becomes harder to reproduce these models to discover those aspects.” Federico and Dirk’s post stops short of offering concrete solutions—no single paper could—but it underscores the importance of learning, asking the right (and often most difficult) questions, and refusing to accept an untenable status quo. If what inspires you to take action is expanding your knowledge and growing your skill set, we have some great options for you to choose from this week, too.


The secret of making better decisions, faster

While agility might be critical for sporting success, that doesn't mean it's easily achieved. Filippi tells ZDNet he's spent many years building a strong team, with great heads of department who are empowered to make big calls. "Most of the time you trust them to get on with it," he says. "I'm more of an orchestrator – you cannot micromanage a race team because there's just too much going on. The pace and the volume of work being achieved every week is just mind-blowing." Hackland has similar experiences at Williams F1. Employees are empowered to take decisions and their confidence to make those calls in the factory or out on the track is a crucial component of success. "The engineer who's sitting on the pit wall doesn't have to ask the CIO if we should pit," he says. "The decisions that are made all through the organisation don't feed up to one single individual. Everyone is allowed to make decisions up or down the organisation." As well as being empowered to make big calls, Hackland says a no-blame culture is critical to establishing and supporting decentralised decision making in racing teams.


How to avoid the ethical pitfalls of artificial intelligence and machine learning

Disconnects also exist between key functional stakeholders required to make sound holistic judgements around ethics in AI and ML. “There is a gap between the bit that is the data analytics AI, and the bit that is the making of the decision by an organisation. You can have really good technology and AI generating really good outputs that are then used really badly by humans, and as a result, this leads to really poor outcomes,” says Prof. Leonard. “So, you have to look not only at what the technology in the AI is doing, but how that is integrated into the making of the decision by an organisation.” This problem exists in many fields. One field in which it is particularly prevalent is digital advertising. Chief marketing officers, for example, determine marketing strategies that are dependent upon the use of advertising technology – which is in turn managed by a technology team. Separate from this is data privacy, which is managed by yet another team, and Prof. Leonard says these teams don’t speak the same language as each other, which makes it difficult to arrive at a strategically cohesive decision.


Five types of thinking for a high performing data scientist

As data scientists, the first and foremost skill we need is to think in terms of models. In its most abstract form, a model is any physical, mathematical, or logical representation of an object, property, or process. Let’s say we want to build an aircraft engine that will lift heavy loads. Before we build the complete aircraft engine, we might build a miniature model to test the engine for a variety of properties (e.g., fuel consumption, power) under different conditions (e.g., headwind, impact with objects). Even before we build a miniature model, we might build a 3-D digital model that can predict what will happen to the miniature model built out of different materials. ... Data scientists often approach problems with cross-sectional data at a point in time to make predictions or inferences. Unfortunately, given the constantly changing context around most problems, very few things can be analyzed statically. Static thinking reinforces the ‘one-and-done’ approach to model building that is misleading at best and disastrous at its worst. Even simple recommendation engines and chatbots trained on historical data need to be updated on a regular basis. 


Double Trouble – the Threat of Double Extortion Ransomware

Over the past 12 months, double extortion attacks have become increasingly common as the ‘business model’ has proven effective. The data center giant Equinix was hit by the Netwalker ransomware. The threat actor behind that attack was also responsible for the attack against K-Electric, the largest power supplier in Pakistan, demanding $4.5 million in Bitcoin for decryption keys and to stop the release of stolen data. Other companies known to have suffered such attacks include the French system and software consultancy Sopra Steria; the Japanese game developer Capcom; the Italian liquor company Campari Group; the US military missile contractor Westech; the global aerospace and electronics engineering group ST Engineering; travel management giant CWT, who paid $4.5M in Bitcoin to the Ragnar Locker ransomware operators; business services giant Conduent; even soccer club Manchester United. Research shows that in Q3 2020, nearly half of all ransomware cases included the threat of releasing stolen data, and the average ransom payment was $233,817 – up 30% compared to Q2 2020. And that’s just the average ransom paid.


Evolution of code deployment tools at Mixpanel

Manual deploys worked surprisingly well while we were getting our services up and running. More and more features were added to mix to interact not just with k8s but also other GCP services. To avoid dealing with raw YAML files directly, we moved our k8s configuration management to Jsonnet. Jsonnet allowed us to add templates for commonly used paradigms and reuse them in different deployments. At the same time, we kept adding more k8s clusters. We added more geographically distributed clusters to run the servers handling incoming data to decrease latency perceived by our ingestion API clients. Around the end of 2018, we started evaluating a European Data Residency product. That required us to deploy another full copy of all our services in two zones in the European Union. We were now up to 12 separate clusters, and many of them ran the same code and had similar configurations. While manual deploys worked fine when we ran code in just two zones, it quickly became infeasible to keep 12 separate clusters in sync manually. Across all our teams, we run more than 100 separate services and deployments. 


When physics meets financial networks

Generally, physics and financial systems are not easily associated in people's minds. Yet, principles and techniques originating from physics can be very effective in describing the processes taking place on financial markets. Modeling financial systems as networks can greatly enhance our understanding of phenomena that are relevant not only to researchers in economics and other disciplines, but also to ordinary citizens, public agencies and governments. The theory of Complex Networks represents a powerful framework for studying how shocks propagate in financial systems, identifying early-warning signals of forthcoming crises, and reconstructing hidden linkages in interbank systems. ... Here is where network theory comes into play, by clarifying the interplay between the structure of the network, the heterogeneity of the individual characteristics of financial actors and the dynamics of risk propagation, in particular contagion, i.e. the domino effect by which the instability of some financial institutions can reverberate to other institutions to which they are connected. The associated risk is indeed "systemic", i.e. both produced and faced by the system as a whole, as in collective phenomena studied in physics.
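
To give a flavour of the “domino effect” the article describes, here is a toy threshold-contagion sketch in Python. The exposures, capital buffers and failure rule are invented for illustration – real systemic-risk models are far richer – but the cascade mechanics are the same in spirit.

# Toy contagion cascade on a miniature interbank network.
# exposures[a][b] = amount bank a stands to lose if bank b fails.
exposures = {
    "A": {"B": 30, "C": 10},
    "B": {"C": 25},
    "C": {"D": 40},
    "D": {},
}
capital = {"A": 45, "B": 20, "C": 30, "D": 15}   # loss-absorbing buffers

def cascade(initial_failures):
    failed = set(initial_failures)
    while True:
        newly_failed = set()
        for bank, book in exposures.items():
            if bank in failed:
                continue
            loss = sum(amount for counterparty, amount in book.items()
                       if counterparty in failed)
            if loss > capital[bank]:          # buffer exhausted: bank fails
                newly_failed.add(bank)
        if not newly_failed:
            return failed
        failed |= newly_failed

print(cascade({"D"}))   # {'B', 'C', 'D'}: D's failure topples C, then B; A's buffer holds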


What’s Driving the Surge in Ransomware Attacks?

The trend involves a complex blend of geopolitical and cybersecurity factors, but the underlying reasons for its recent explosion are simple. Ransomware attacks have gotten incredibly easy to execute, and payment methods are now much more friendly to criminals. Meanwhile, businesses are growing increasingly reliant on digital infrastructure and more willing to pay ransoms, thereby increasing the incentive to break in. As the New York Times notes, for years “criminals had to play psychological games to trick people into handing over bank passwords and have the technical know-how to siphon money out of secure personal accounts.” Now, young Russians with a criminal streak and a cash imbalance can simply buy the software and learn the basics on YouTube tutorials, or by getting help from syndicates like DarkSide — who even charge clients a fee to set them up to hack into businesses in exchange for a portion of the proceeds. The breach of the education publisher involving the false pedophile threat was a successful example of such a criminal exchange. Meanwhile, Bitcoin has made it much easier for cybercriminals to collect on their schemes.



Quote for the day:

"To make a decision, all you need is authority. To make a good decision, you also need knowledge, experience, and insight." -- Denise Moreland

Daily Tech Digest - June 12, 2021

Data Architecture: One Size Does Not Fit All

“There’s no one right way,” said George Yuhasz, VP and Head of Enterprise Data at NewRez, because the demands and the value that Data Architecture practices bring to an organization are as varied as the number of firms trying to get value from data. Yuhasz was speaking at the DATAVERSITY® Data Architecture Online Conference. The very definition of Data Architecture varies as well, he says, so get clarity among stakeholders to understand the constraints and barriers within which Data Architecture needs to fit. Will the organization prioritize process alone? Or process, platforms, and infrastructure? Or will it be folded into a larger enterprise architecture? Without a clear definition, it’s impossible to determine key success criteria, or to know what success is, both in the short term and long term. The definition should be simple enough to be understood by a diverse group of stakeholders, and elegant enough to handle sophistication and nuance. Without it, he said, the tendency will be to “drop everything that even relates to the term ‘data’ onto your plate.”


Chrome zero-day, hot on the heels of Microsoft’s IE zero-day. Patch now!

This bug is listed as a “type confusion in V8”, where V8 is the part of Chrome that runs JavaScript code, and type confusion means that you can feed V8 one sort of data item but trick JavaScript into handling it as if it were something else, possibly bypassing security checks or running unauthorised code as a result. For example, if your code is doing JavaScript calculations on a data object that has a memory block of 16 bytes allocated to it, but you can trick the JavaScript interpreter into thinking that you are working on an object that uses 1024 bytes of memory, you can probably end up sneakily writing data outside the official 16-byte allocation, thus pulling off a buffer overflow attack. And, as you probably know, JavaScript security holes that can be triggered by JavaScript code embedded in a web page often result in RCE exploits, or remote code execution. That’s because you’re relying on your browser’s JavaScript engine to keep control over what is essentially unknown and untrusted program code downloaded and executed automatically from an external source.


New quantum repeaters could enable a scalable quantum internet

If it can be built, a quantum internet would allow calculations to be distributed between multiple quantum computers – allowing larger and more complex problems to be solved. A quantum internet would also provide secure communications because eavesdropping on the exchange of quantum information can be easily identified. The backbone of such a quantum network would be quantum-mechanically entangled links between different network points, called nodes. However, creating entangled links over long distances at high data rates remains a challenge. A big problem is that quantum information becomes degraded as it is transmitted, and the rules of quantum mechanics do not allow signals to be amplified by conventional repeater nodes. The solution could be quantum repeaters, which can amplify quantum signals while still obeying quantum physics. Now, two independent research groups — one at the Institute of Photonic Sciences (ICFO) in Spain and the other at the University of Science and Technology of China (USTC) – have shown how quantum memories (QM) offer a path towards practical quantum repeaters.


3 Mindsets High-Performing Business Leaders Use to Create Growth

Success in life comes from understanding that a lot will happen to you outside of your control. As humans, we have emotions and feelings — they tend to take over when something happens to us that's outside of our control. When you focus on the things you can't control, you put yourself in a dark place that threatens to spiral your mind. High-performing entrepreneurs don't invest time, energy and emotion into situations that are outside of their control. Growth-focused business leaders make a deliberate effort to optimize their mind, body and spirit. They do the work to operate in a peak state and learn the techniques to get back into a peak state when they feel themselves slipping. ... Authentic business leadership means you create wealth through purposeful work and the desire to build a legacy. You need a vision for where you're going if you plan to get there and experience the benefits of entrepreneurship. Whether it's setting up a vision board or having your goals displayed on your phone's screensaver, you grow when you have a vision and implement growth strategies consistently. 


Has Serverless Jumped the Shark?

Today’s hyped technologies — such as functions-as-a-service offerings like Amazon Lambda, serverless frameworks for Kubernetes like Knative and other non-FaaS serverless solutions like database-as-a-service (DBaaS) — are the underpinnings of more advanced delivery systems for digital business. New methods of infrastructure delivery and consumption, such as cloud computing, are as much a cultural innovation as a technological one, most obviously in DevOps. Even with these technological innovations, companies will still consume a combination of legacy application data, modern cloud services and other serverless architectures to accomplish their digital transformation. The lynchpin isn’t a wholesale move to new technologies, but rather the ability to integrate these technologies in a way that eases the delivery of digital experiences and is invisible to the end-user. Serverless hasn’t jumped the shark. Rather, it is maturing. The Gartner Hype Cycle, a graphic representation of the maturity and adoption of technologies and applications, forecast in 2017 that it would take two to five years for serverless to move from a point of inflated expectations to a plateau of productivity.


Opinion: Andreessen Horowitz is dead wrong about cloud

In The Cost of Cloud, a Trillion-Dollar Paradox, Andreessen Horowitz Capital Management’s Sarah Wang and Martin Casado highlighted the case of Dropbox closing down its public cloud deployments and returning to the datacenter. Wang and Casado then extrapolated the savings of 50% or more that Dropbox and other enterprises realized by bailing on some or all of their cloud deployments to the wider cloud-consuming ecosphere. Wang and Casado’s conclusion? Public cloud is more than doubling infrastructure costs for most enterprises relative to legacy data center environments. ... Well-architected and well-operated cloud deployments will be highly successful compared to datacenter deployments in most cases. However, “highly successful” may or may not mean less expensive. A singular comparison between the cost of cloud versus the cost of a datacenter shouldn’t be made as an isolated analysis. Instead, it’s important to analyze the differential ROI of one set of costs versus the alternative. While this is true for any expenditure, it’s doubly true for public cloud, since migration can have profound impacts on revenue. Indeed, the major benefits of the cloud are often related to revenue, not cost.


What Is Penetration Testing -Strategic Approaches & Its Types?

Social engineering plays a crucial role in penetration testing: it is the test that probes the human network of an organization. It helps guard against attacks launched from within the organization, whether by an employee looking to start a breach or by an employee tricked into sharing data. This kind of test comes in both remote and physical forms, and targets the most common social engineering tactics used by ethical hackers – phishing attacks, impostors, tailgating, pretexting, gifts, dumpster diving and eavesdropping, to name a few. Organizations need penetration testing professionals, and at least a minimum knowledge of the discipline, to secure themselves against cyberattacks. Testers use different approaches to find attacks and defend against them, and there are five types of penetration testing: network, web application, client-side, wireless network, and social engineering penetration tests. One of the best ways to learn penetration testing is through certification; the EC-Council Certified Penetration Testing Professional (CPENT) is one of the best-known courses. Going beyond flat networks, the course boosts your understanding by teaching how to pen test OT and IoT systems ...


Global chip shortage: How manufacturers can cope over the long term

Although it is very important to shift more production to the U.S., there are challenges in doing so, Asaduzzaman said. "Currently the number of semiconductor fabrication foundries in the U.S. is not adequate. If we help overseas-based companies to build factories here, that will be good. But we definitely don't want to send all our technology production overseas and then have no control. That will be a big mistake." However, one ray of hope is that policy provisions may encourage domestic production of semiconductors, Asaduzzaman said. "For instance, regulations could require U.S. companies that buy semiconductors to purchase a certain percentage from domestic producers. Industries have to use locally produced chips to make sure that local chip industries can sustain." Asaduzzaman called it "insulting and incorrect" that some overseas chip manufacturers believe the U.S. doesn't have the skills and cannot keep the cost low to compete with others in the industry. "We are the ones who invented the chip technology; now we are depending on overseas companies for chips," he said.


Complexity is the biggest threat to cloud success and security

Enterprises hit the “complexity wall” soon after deployment when they realize the cost and complexity of operating a complicated and widely distributed cloud solution outpaces its benefits. The number of moving parts quickly becomes too heterogeneous and thus too convoluted. It becomes obvious that organizations can’t keep the skills around to operate and maintain these platforms. Welcome to cloud complexity. Many in IT blame complexity on the new array of choices developers have when they build systems within multicloud deployments. However, enterprises need to empower innovative people to build better systems in order to build a better business. Innovation is just too compelling of an opportunity to give up. If you place limits on what technologies can be employed just to avoid operational complexity, odds are you’re not the best business you can be. Security becomes an issue as well. Security experts have long known that more vulnerabilities exist within a more complex technology solution (the more physically and logically distributed and heterogeneous). 


Limits to Blockchain Scalability vs. Holochain

It is worth noting that it would probably take years to get all of Twitter’s users to migrate over to Holochain to host themselves, even if Twitter switched their infrastructure to this kind of decentralized architecture. This is where the Holo hosting framework comes in. Holo enables Holochain apps, which would normally be self-hosted, to be served to a web user from a peer hosting network. In other words, if your users just expect to open their browser, type in a web address, and have access to your app, you may need to provide them with a hosted version. Holo has a currency which pays hosts for the hosting power they provide for Holochain apps that still need to serve mainstream web users. So instead of paying Amazon or Google to host your app, you pay a network of peer hosting providers in the HoloFuel cryptocurrency. Instead of gas fees costing over 1 billion times what it would cost to host on Amazon (as it does on Ethereum), we expect Holo hosting to have pricing more competitive with current cloud providers because of the low demand on system resources as outlined above.



Quote for the day:

“Doing the best at this moment puts you in the best place for the next moment.” -- Oprah Winfrey

Daily Tech Digest - June 11, 2021

Why API Quality Is Top Priority for Developers

Processes such as chaos engineering, load testing and manual quality assurance can uncover situations where an API is failing to handle unexpected situations. Deploying your API to a cloud provider with a compelling SLA instead of your own hardware and network shifts the burden of infrastructure resiliency to a service, freeing your time to build features for your customers. A comprehensive suite of automated tests isn’t always sufficient to provide a robust API. Edge cases, unexpected code branches and other unplanned behavior may be triggered by requests that were not considered when writing the test suite. Traditional automated tests should be complemented by fuzz testing to help uncover hidden execution paths. ... It is expected that most APIs are built on layers of open source libraries and frameworks. Software composition analysis is a necessity to stay on top of zero-day vulnerabilities by identifying vulnerable dependencies as soon as they are discovered. OWASP guidance is a must-have—directing API developers to implement attack mitigation strategies such as CORS and CSRF protection. Application logic must be well tested for authorization and authentication.
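
The point about fuzz testing uncovering hidden execution paths can be illustrated with a property-based test. The sketch below uses the Hypothesis library; parse_quantity and its contract are hypothetical stand-ins for an API-layer helper, not anything taken from the article.

# A minimal property-based "fuzz" test using the Hypothesis library.
from hypothesis import given, strategies as st

def parse_quantity(raw: str) -> int:
    """Hypothetical API-layer helper: parse an order quantity, rejecting bad input."""
    value = int(raw)               # raises ValueError on non-numeric input
    if value <= 0 or value > 10_000:
        raise ValueError("quantity out of range")
    return value

@given(st.text())
def test_parse_quantity_never_misbehaves(raw):
    # Property: any input either parses to an in-range int or raises ValueError.
    # Hypothesis throws thousands of odd strings ("", "  7", "1e3", "99999", ...)
    # at the function, exercising paths a hand-written test suite rarely covers.
    try:
        value = parse_quantity(raw)
    except ValueError:
        return
    assert 0 < value <= 10_000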


New Ransomware Group Claiming Connection to REvil Gang Surfaces

Like many established ransomware operators, the gang behind Prometheus has adopted a very professional approach to dealing with its victims — including referring to them as "customers," PAN said. Members of the group communicate with victims via a customer service ticketing system that includes warnings on approaching payment deadlines and notifications of plans to sell stolen data via auction if the deadline is not met. "New ransomware gangs like Prometheus follow the same TTPs as big players [such as] Maze, Ryuk, and NetWalker because it is usually effective when applied the right way with the right victim," Santos says. "However, we do find it interesting that this group sells the data if no ransom is paid and are very vocal about it." From samples provided by the Prometheus ransomware gang on their leak site, the group appears to be selling stolen databases, emails, invoices, and documents that include personally identifiable information. "There are marketplaces where threat actors can sell leaked data for a profit, but we currently don't have any insight on how much this information could be sold in a marketplace," Santos says.


Google is using AI to design its next generation of AI chips more quickly than humans can

Google’s engineers trained a reinforcement learning algorithm on a dataset of 10,000 chip floor plans of varying quality, some of which had been randomly generated. Each design was tagged with a specific “reward” function based on its success across different metrics like the length of wire required and power usage. The algorithm then used this data to distinguish between good and bad floor plans and generate its own designs in turn. As we’ve seen when AI systems take on humans at board games, machines don’t necessarily think like humans and often arrive at unexpected solutions to familiar problems. When DeepMind’s AlphaGo played human champion Lee Sedol at Go, this dynamic led to the infamous “move 37” — a seemingly illogical piece placement by the AI that nevertheless led to victory. Nothing quite so dramatic happened with Google’s chip-designing algorithm, but its floor plans nevertheless look quite different to those created by a human. Instead of neat rows of components laid out on the die, sub-systems look like they’ve almost been scattered across the silicon at random.
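
The “reward” in this setup is just a scalar that scores each candidate floor plan. A heavily simplified sketch is shown below; the metrics, weights and candidate format are invented for illustration and are not Google's actual formulation, which folds in many more constraints.

# Scoring candidate chip floor plans with a scalar reward, as a
# reinforcement-learning agent would see it (illustrative values only).
def reward(plan):
    # Lower wirelength and power are better, so they enter with negative weights.
    return -1.0 * plan["wirelength_mm"] - 0.5 * plan["power_watts"]

candidates = [
    {"name": "human baseline", "wirelength_mm": 120.0, "power_watts": 38.0},
    {"name": "agent layout",   "wirelength_mm": 104.0, "power_watts": 41.0},
]

best = max(candidates, key=reward)
print(best["name"], reward(best))   # agent layout -124.5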


More and More Professionals Are Using Their Career Expertise to Launch Entrepreneurial Ventures

The first step is to immerse yourself within your training and specialty and have the confidence to be a key thought leader in the space. Do the extra research, spend the time to learn all of the new information and data in your field to truly understand the opportunity within. "I have been fortunate to be involved with several top academic institutions during my training. While the training was fantastic, there were areas that I felt could be improved for the ultimate outcome of increased access to high-quality healthcare," says Dr. Bajaj. "Thankfully, this vision has resulted in great outcomes and happy patients." ... "Ready. Fire. Aim!" as Dr. Bajaj puts it, "Time was not waiting for me to be fully prepared. Sometimes you have to take the leap." In entrepreneurship, there are no guarantees, which is quite different from some of the career paths that we have trained for our entire academic life. Guaranteed salary, retirement plans, and annual bonuses are far from promised in your own business, and it is important to adapt accordingly. Everything will not go according to plan, and it is important to find comfort with that. As long as the launchpad for growth has been established, patience is the biggest challenge, not security.


CISOs: It's time to get back to security basics

The goal of cybersecurity used to be protecting data and people's privacy, Summit said. There has been a major shift in that thinking. "It's one thing to lose a patient's data, which is extremely important to protect, but when you start interrupting" people's ability to travel or the food supply chain, "you have a whole different level of problems … It's not just about protecting data but your operations. That's where major changes are starting to occur." Summit added that he has long said if companies were making cybersecurity a high priority long before now, "we wouldn't be in this position" and facing government scrutiny. The cybersecurity field is "incredibly dynamic," Hatter said, and CISOs don't have the luxury of planning out three to five years. "We want to create and deploy a strategy that's sound and solid. But market forces demand we recalibrate what we do, and COVID-19 was a great example of that." CISOs now have to have as resilient a strategy as possible but be prepared to make changes. Managed security service providers can help, Summit said, but CISOs are still feeling overwhelmed.


New quantum entanglement verification method cuts through the noise

Virtually any interaction with the environment can cause qubits to collapse like a house of cards and lose their quantum correlations – a process called decoherence. If this happens before an algorithm finishes running, the result is a mess, not an answer. (You would not get much work done on a laptop that had to restart every second.) In general, the more qubits a quantum computer has, the harder they are to keep quantum; even today’s most advanced quantum processors still have fewer than 100 physical qubits. The solution to imperfect physical qubits is quantum error correction (QEC). By entangling many qubits together in a so-called “genuine multipartite entangled” (GME) state, where every qubit is entangled with every other qubit in that bunch, it is possible to create a composite “logical” qubit. This logical qubit acts as an ideal qubit: the redundancy of the shared information means if one of the physical qubits decoheres, the information can be recovered from the rest of the logical qubit. Developing quantum error-correcting systems requires verifying that the GME states used in logical qubits are present and working as intended, ideally as quickly and efficiently as possible.
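
The idea of a GME state can be made a little more tangible with a few lines of numpy: build the three-qubit GHZ state, the simplest genuinely multipartite entangled state, and check that tracing out two qubits leaves the remaining one maximally mixed – its information lives only in the correlations, which is exactly what logical qubits exploit. This is a textbook illustration, not the noise-robust verification method the researchers developed.

# 3-qubit GHZ state (|000> + |111>)/sqrt(2) and the reduced state of one qubit.
import numpy as np

ghz = np.zeros(8, dtype=complex)
ghz[0b000] = ghz[0b111] = 1 / np.sqrt(2)

rho = np.outer(ghz, ghz.conj())              # full 3-qubit density matrix

# Partial trace over qubits 1 and 2, keeping qubit 0.
rho6 = rho.reshape(2, 2, 2, 2, 2, 2)         # indices: (q0, q1, q2, q0', q1', q2')
rho_q0 = np.einsum("ijkljk->il", rho6)       # sum over matching q1, q2

purity = np.trace(rho_q0 @ rho_q0).real
print(np.round(rho_q0.real, 3))              # the 2x2 identity / 2, i.e. maximally mixed
print(round(purity, 3))                      # 0.5 -> the single qubit carries no information alone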


DeepMind says reinforcement learning is ‘enough’ to reach general AI

Some scientists believe that assembling multiple narrow AI modules will produce higher intelligent systems. For example, you can have a software system that coordinates between separate computer vision, voice processing, NLP, and motor control modules to solve complicated problems that require a multitude of skills. A different approach to creating AI, proposed by the DeepMind researchers, is to recreate the simple yet effective rule that has given rise to natural intelligence. “[We] consider an alternative hypothesis: that the generic objective of maximising reward is enough to drive behaviour that exhibits most if not all abilities that are studied in natural and artificial intelligence,” the researchers write. This is basically how nature works. As far as science is concerned, there has been no top-down intelligent design in the complex organisms that we see around us. Billions of years of natural selection and random variation have filtered lifeforms for their fitness to survive and reproduce. Living beings that were better equipped to handle the challenges and situations in their environments managed to survive and reproduce. The rest were eliminated.
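
The hypothesis builds on standard reinforcement learning, in which behaviour is learned purely by maximising a scalar reward. The sketch below is a vanilla tabular Q-learning agent on a tiny corridor environment – nothing to do with DeepMind's own systems – but it shows the bare mechanism the paper argues can be pushed much further.

# Minimal tabular Q-learning on a 5-cell corridor: the agent starts at cell 0
# and is rewarded (+1) only upon reaching cell 4. "Walk right" emerges purely
# from reward maximisation.
import random

N_STATES, ACTIONS = 5, (-1, +1)            # move left / move right
q = {(s, a): 0.0 for s in range(N_STATES) for a in ACTIONS}
alpha, gamma, epsilon = 0.5, 0.9, 0.3

for _ in range(500):
    s = 0
    while s != N_STATES - 1:
        a = random.choice(ACTIONS) if random.random() < epsilon \
            else max(ACTIONS, key=lambda act: q[(s, act)])
        s_next = min(max(s + a, 0), N_STATES - 1)
        r = 1.0 if s_next == N_STATES - 1 else 0.0
        best_next = max(q[(s_next, act)] for act in ACTIONS)
        q[(s, a)] += alpha * (r + gamma * best_next - q[(s, a)])
        s = s_next

# Greedy action per non-terminal state; typically [1, 1, 1, 1], i.e. always move right.
print([max(ACTIONS, key=lambda act: q[(s, act)]) for s in range(N_STATES - 1)])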


Evaluation of Cloud Native Message Queues

The significant rise in internet-connected devices will consequently have a substantial influence on systems’ network traffic, and current point-to-point technologies using synchronous communication between end-points in IoT systems are no longer a sustainable solution. Message queue architectures using the publish-subscribe paradigm are widely implemented in event-based systems. This paradigm uses asynchronous communication between entities and lends itself to scalable, high-throughput, low-latency systems that are well adapted to the IoT domain. This thesis evaluates the adaptability of three popular message queue systems in Kubernetes. The systems are designed differently: the Kafka system uses a peer-to-peer architecture, while STAN and RabbitMQ use a master-slave architecture based on the Raft consensus algorithm. A thorough analysis of the systems’ capabilities in terms of scalability, performance, and overhead is presented. The conducted tests give further knowledge on how the performance of the Kafka system is affected in multi-broker clusters using a varying number of partitions, enabling higher levels of parallelism for the system.
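
The role partitions play in that parallelism can be sketched with the kafka-python client. Broker address, topic name and payload fields below are placeholders: records sharing a key hash to the same partition, so per-key ordering is preserved while additional partitions let more consumers in the same group work in parallel.

# Keyed publishing and a consumer group with the kafka-python client
# (connection details and names are placeholders).
import json
from kafka import KafkaProducer, KafkaConsumer

producer = KafkaProducer(
    bootstrap_servers="localhost:9092",
    key_serializer=lambda k: k.encode("utf-8"),
    value_serializer=lambda v: json.dumps(v).encode("utf-8"),
)

for i in range(10):
    device_id = f"sensor-{i % 3}"                 # same key -> same partition
    producer.send("iot-readings", key=device_id, value={"seq": i, "temp_c": 21.5})
producer.flush()

# Each consumer in the "analytics" group is assigned a disjoint subset of the
# topic's partitions, so more partitions allow more parallel consumers.
consumer = KafkaConsumer(
    "iot-readings",
    bootstrap_servers="localhost:9092",
    group_id="analytics",
    auto_offset_reset="earliest",
)
for record in consumer:
    print(record.partition, record.key, record.value)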


Mysterious Custom Malware Collects Billions of Stolen Data Points

Researchers have uncovered a 1.2-terabyte database of stolen data, lifted from 3.2 million Windows-based computers over the course of two years by an unknown, custom malware. The heisted info includes 6.6 million files and 26 million credentials, and 2 billion web login cookies – with 400 million of the latter still valid at the time of the database’s discovery. According to researchers at NordLocker, the culprit is a stealthy, unnamed malware that spread via trojanized Adobe Photoshop versions, pirated games and Windows cracking tools, between 2018 and 2020. It’s unlikely that the operators needed any depth of skill to pull off their data-harvesting campaign, they added. “The truth is, anyone can get their hands on custom malware. It’s cheap, customizable, and can be found all over the web,” the firm said in a Wednesday posting. “Dark Web ads for these viruses uncover even more truth about this market. For instance, anyone can get their own custom malware and even lessons on how to use the stolen data for as little as $100. And custom does mean custom – advertisers promise that they can build a virus to attack virtually any app the buyer needs.”


Get your technology infrastructure ready for the Age of Uncertainty

As I say, it’s by no means clear what happens next and how ingrained changes will be. It’s plausible, of course, that we largely go back to old habits, although that seems unlikely with a groundswell of employees having become accustomed to a different way of life and a different way of working. And it’s worth noting that even activities that evoke a return to ancient ways of living – crafts, baking and so on – are now very much digitally imbued. We download apps, consult websites and share ideas on forums when we try out a new recipe, and this sort of binary activity is part of the fabric of life because it is faster, more convenient and more scalable than the older alternatives. But what we need to do is strike the perfect balance between technology-enabled agility and what we want to do with our time. What we will need to manage through change is clear though. Adaptivity, enabled by robust, data-centric digital business designs, will become the watchword of operations. In other words, companies will need to be able to move fast, whatever happens, changing operating models, moving into adjacent markets and generally taking nothing for granted. In the new Age of Uncertainty, legacy systems have to be reassessed in the context of how best to build for agility.



Quote for the day:

"To have long term success as a coach or in any position of leadership, you have to be obsessed in some way." -- Pat Riley

Daily Tech Digest - June 10, 2021

Tracing: Why Logs Aren’t Enough to Debug Your Microservices

Traces complement logs. While logs provide information about what happened inside the service, distributed tracing tells you what happened between services/components and their relationships. This is extremely important for microservices, where many issues are caused due to the failed integration between components. Also, logs are a manual developer tool and can be used for any level of activity – a specific low-level detail, or a high-level action. This is also why there are many logging best practices available for developers to learn from. On the other hand, traces are generated automatically, providing the most complete understanding of the architecture. Distributed tracing is tracing that is adapted to a microservices architecture. Distributed tracing is designed to enable request tracking across autonomous services and modules, providing observability into cloud native systems. ... Distributed tracing provides observability and a clear picture of the services. This improves productivity because it enables developers to spend less time trying to locate errors and debugging them, as the answers are more clearly presented to them.
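
One common way to generate such traces in Python is the OpenTelemetry SDK. The sketch below is only a minimal illustration: the span names and nested calls are placeholders, and the console exporter stands in for whatever tracing backend a real deployment would use.

# Minimal OpenTelemetry tracing sketch (Python SDK). In a real microservice
# setup, spans would be exported to a collector/backend and trace context
# would propagate across process boundaries via HTTP or messaging headers.
from opentelemetry import trace
from opentelemetry.sdk.trace import TracerProvider
from opentelemetry.sdk.trace.export import SimpleSpanProcessor, ConsoleSpanExporter

provider = TracerProvider()
provider.add_span_processor(SimpleSpanProcessor(ConsoleSpanExporter()))
trace.set_tracer_provider(provider)
tracer = trace.get_tracer("checkout-service")

# Parent/child spans capture what happened *between* components – the
# relationships that individual service logs cannot show on their own.
with tracer.start_as_current_span("handle_checkout") as parent:
    parent.set_attribute("order.id", "demo-123")
    with tracer.start_as_current_span("charge_payment"):
        pass                       # call to the payment service would go here
    with tracer.start_as_current_span("reserve_inventory"):
        pass                       # call to the inventory service would go here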


How SAML 2.0 Authentication Works and Why It Matters

At its core, Security Assertion Markup Language (SAML) 2.0 is a means to exchange authorization and authentication information between services. SAML is frequently used to implement internal corporate single sign-on (SSO) solutions where the user logs into a service that acts as the single source of identity, which then grants access to a subset of other internal services. ... Generally, SAML authentication solves three important problems. First, SAML offers a significant improvement to user experience: users only have to remember their credentials with a single identity provider, and don't have to worry about usernames and passwords for every application they use. Second, SAML allows application developers to outsource identity management and authentication implementation to external providers without implementing it themselves. Third, and perhaps most importantly, SAML dramatically reduces the operational overhead of managing access within an organization. If an employee leaves or transfers to another team, their access will be automatically revoked or downgraded across all applications connected to the identity provider.
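
To make the exchange a bit more concrete: in the common SP-initiated flow, the service provider sends the browser to the identity provider with an AuthnRequest that is DEFLATE-compressed, base64-encoded and passed as the SAMLRequest query parameter (the HTTP-Redirect binding). The sketch below uses only the Python standard library, with placeholder entity IDs and URLs; production code should rely on a maintained SAML library and properly signed messages.

# Building the redirect URL for SP-initiated SSO (SAML HTTP-Redirect binding).
# Entity IDs, URLs and the request ID format are placeholders.
import base64, datetime, urllib.parse, uuid, zlib

AUTHN_REQUEST = """<samlp:AuthnRequest
    xmlns:samlp="urn:oasis:names:tc:SAML:2.0:protocol"
    xmlns:saml="urn:oasis:names:tc:SAML:2.0:assertion"
    ID="_{req_id}" Version="2.0" IssueInstant="{now}"
    AssertionConsumerServiceURL="https://app.example.com/saml/acs">
  <saml:Issuer>https://app.example.com/metadata</saml:Issuer>
</samlp:AuthnRequest>"""

def redirect_url(idp_sso_url):
    xml = AUTHN_REQUEST.format(
        req_id=uuid.uuid4().hex,
        now=datetime.datetime.utcnow().strftime("%Y-%m-%dT%H:%M:%SZ"),
    )
    # Raw DEFLATE (no zlib header), as the Redirect binding requires.
    deflater = zlib.compressobj(9, zlib.DEFLATED, -15)
    deflated = deflater.compress(xml.encode("utf-8")) + deflater.flush()
    saml_request = base64.b64encode(deflated).decode("ascii")
    return idp_sso_url + "?" + urllib.parse.urlencode({"SAMLRequest": saml_request})

print(redirect_url("https://idp.example.com/sso"))   # the browser is redirected here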


How to build Data Science capabilities within an organization

Signing up for a data science program is half the battle won. But only a strong, steady commitment and effort will take it to completion and yield amazing results. You as an organization may be clear on the ‘why’ of the whole endeavor. You know that more self-sufficiency and expertise will bring in more revenue. But without communicating the benefits learning data science has for your employees, you are unlikely to see genuine involvement. You can encourage buy-in from employees by showcasing the future career path, the rewards of upskilling, higher payouts for working on advanced projects, or even the fear of being left out (I hate to say this, but this is how the cookie crumbles). Of course, senior leadership in your organization needs to weigh the pros & cons of such a transformation and accordingly roll out the mandate to selected groups, as there may be employees who are not sold on the idea of building the skills required for data science at all. ... A great deal of time, energy, and effort is saved by the wide variety of platforms that provide tools and services for data science monitoring. They track and test employees' progress during the data science program. This can keep your employees on their toes.


New identities are creating opportunities for attackers across the enterprise

The adoption of cloud services, third parties, and remote access has dissolved the traditional network perimeter and made security a far more complex equation than before. Identity security is quickly emerging as the primary line of defence for most organisations, because it allows security teams to tailor each user’s access proportionately based on the needs of their job role. Underpinning this model is Zero-Trust – the practice of treating all accounts with the same minimal level of access until authenticated. In cloud environments, for example, any human or machine identity can be configured with thousands of permissions to access cloud workloads containing critical information. User, group, and role identities are typically assigned permissions depending on their job functions. Providing each identity with its own unique permissions allows users to access what they need, when they need it, without putting company assets at risk of breach. In combination with Zero-Trust, it ensures each identity is only able to gain that access once it is authenticated. The increasing recognition of Zero-Trust as security best practice has led its stock to rise significantly, so much so that 88% of those we researched categorised it as either ‘important’ or ‘very important’ in tackling today’s advanced threats.


What to Know About Updates to the PCI Secure Software Standard

The PCI Council made several clarifications to controls within the standard, added additional guidance to a couple of sections, and added its new module specific to Terminal Software Requirements, which applies to software intended for deployment and execution on payment terminals. Specific to the new module of the Secure Software Standard, Module B, Terminal Software Requirements focus on software intended for deployment and execution on payment terminals or PCI-approved PIN Transaction Security (PTS) point-of-interaction (POI) devices. In total, the new section adds 50 controls covering five control objectives. ... Similar to Terminal Software Attack Mitigation, Terminal Software Security Testing clearly calls out the need to ensure software is "rigorously" tested for vulnerabilities prior to each release. The software developer is expected to have a documented process that is followed to test software for vulnerabilities prior to every update or release. The control tests in this objective continue to highlight secure software development best practices – testing for unnecessary ports or protocols, identifying unsecure transmissions of account data, identification of default credentials, hard-coded authentication credentials, test accounts or data, and/or ineffective software security controls.


Reawakening Agile with OKRs?

The approach I found works best is to lead with OKRs - what the team want to do. So throw your backlog away and adopt a just-in-time requirements approach. Stop seeing "more work than we have money and time to do" as a sign of failure and see it as a sign of success. Every time you need to plan work, return to your OKRs and ask: What can we do now, in the time we have, to move closer to our OKRs? Stop worrying about burning down the backlog and put purpose first, remember why the team exists, ask: Right here, right now, how can we add value? Used in a traditional MBO style, one might expect top managers to set OKRs which then cascade down the company with each team being given their own small part to undertake. That would require a Gosplan-style planning department and would rob teams of autonomy and real decision-making power. (Gosplan was the agency responsible for creating 5-year plans in the USSR, and everyone knows how that story ended.) Instead, leaders should lead. Leaders should stick to big things. They should emphasise the purpose and mission of the organization, they should set large goals for the whole organization but those goals should not be specific for teams.


Intel Plugs 29 Holes in CPUs, Bluetooth, Security

Several of the 29 vulnerabilities are rated as high-severity – including four local privilege escalation vulnerabilities in firmware for Intel’s CPU products; another local privilege escalation vulnerability in Intel Virtualization Technology for Directed I/O (VT-d); a network-exploitable privilege escalation vulnerability in the Intel Security Library; another locally exploitable privilege escalation in the NUC family of computers; yet more in its Driver and Support Assistant (DSA) software and RealSense ID platform; and a denial-of-service (DoS) vulnerability in selected Thunderbolt controllers. ... “Interestingly, it’s in the firmware that controls the CPUs, not in the host operating system,” he continued. “We’re used to automatically applying updates for operating systems and software products – and even then we still occasionally see updates that result in the dreaded blue screen of death.” Applying firmware updates is not as well-managed as software updates, he noted, likely because they’re tougher to test … which means they pack more inherent risk.


How to Deploy Emotional Intelligence for Work Success

“In simplest terms, empathy is putting yourself in someone else’s shoes,” writes Denna Ritchie in a Calendar article. Possessing this is arguably the most important leadership skill. After all, being empathetic is the foundation when building and fortifying social connections. What’s more, it can create a more loyal, engaged, and productive team. As if that weren’t enough, empathy increases happiness, teaches presence and fosters innovation and collaboration. ... Speaking of vulnerability, psychologist Nick Wignall defines it as “the willingness to acknowledge your emotions -- especially painful ones.” He clarifies “that when we talk about vulnerability, we’re usually referring to emotional vulnerability. When your best friend suggests that you should work on being more vulnerable in your relationship, they’re probably not talking about making yourself more physically vulnerable.” In short, vulnerability is all about emotions. In particular, difficult emotions like anxiety, frustration, and shame. The other part of the equation is acknowledging these negative emotions and knowing how to address them.


Becoming a Self-Taught Cybersecurity Pro

If you are looking to take your IT career in a new direction where there's loads of demand, there are several interesting subspecialities, and the pay continues to increase, a career in cybersecurity can't be beat right now. It's impossible to ignore all the high-profile attacks -- from the SolarWinds supply chain attack impacting multiple government agencies, to the more recent spate of ransomware attacks against gas pipeline company Colonial Pipeline and meat producer JBS, to name a few. The move to work from home and to accelerate digital transformations has only increased the alert level and the demand for cybersecurity pros. "In cybersecurity right now there's a significant shortage of candidates," said Ariel Weintrab, chief information security officer at Mass Mutual. Her cybersecurity team is hiring from general IT pros and also "recruiting from a wide variety of educational backgrounds," not just technology. Her organization is looking for problem solvers with intellectual creativity. But if you just show up at the hiring office with your liberal arts degree or your cybersecurity certification, how do you stand out from the crowd of other applicants interested in cybersecurity?


The 6 steps to implementing zero trust

When looking for short term wins in pursuit of a long-term goal, businesses should look to target a single application, or a collection of applications, that would most benefit from adopting a zero trust security model – critical applications that key decision makers are more aware of, which will help demonstrate the return on investment (ROI) along the way. Companies also need to understand that this is a learning process, and thus need to be comfortable in adapting their approach as they learn more about what they are trying to protect. Adopting zero trust means businesses will be re-positioning the usual access models, and this may require solicitation and education of stakeholders. Part of the process however is understanding these dependencies and catering for them in the program. ... The overall aim for businesses is to make quick and measurable progress, so choosing to address a number of areas at once would be counterproductive. Just like how a business would take a very focused approach when identifying what applications to protect at this stage, they should apply a similar attitude when determining how to approach zero trust itself.



Quote for the day:

"The test we must set for ourselves is not to march alone but to march in such a way that others will wish to join us." -- Hubert Humphrey

Daily Tech Digest - June 09, 2021

Avoiding ransomware: what security & risk leaders need to know

First, organisations need to determine the OEM provider’s approach to secure product management, from ideation to end of life. Determining this from the outset will help CIOs understand the core competencies of a product security officer, enabling them to cultivate the skills that are needed to productise security features, including product roadmap, planning and lifecycle management. Second, a focus on an integrated digital security approach, which looks holistically across IT and data, product, and operations-related technology, is needed. Currently, too many companies fail to see convergence, leaving key features at risk of being hacked – easily. Companies must look at their supplier risk. Supplier risk has, traditionally, focused on the data and IT infrastructure security of the supply chain, usually missing crucial elements, like product security, which needs to be factored in for better overall security. More importantly, some supply chain leaders are still using old vendor risk policies with OEMs that have increasingly become more digital, compromising the security of new products and devices – and once again leaving the window ajar for hackers to jump in.


For CISOs and artificial intelligence to evolve, trust is a must

With concerns rising from consumers and citizens and the increasing need for more ethics and trust, we need to put limits to ensure sound and fair use of AI technologies. The new EU Artificial Intelligence Act is beneficial because it will dictate the rules and force companies to examine the societal implications of rapid technology adoption. We must find a balance between technology benefits and risks. With the emergence of AI-enabled applications, traditional surveillance is transforming into smart video with new use cases that transcend what we consider surveillance today. Unfortunately, under the pretext of protection, camera operators risk exposing everyone within sight. We tend to overlook what data is collected or if it is secure for the greater good. Any technology use and innovation must be transparent and explainable. In 2020, amidst the COVID-19 disruption, France launched its contact tracing application, but its adoption was incredibly low because most citizens questioned the technology used and how the data was collected and stored. It forced the French government to rethink its approach and launch a new, “enriched” version of the application.


The Creepy Side Of Emotion Recognition Technology

AI experts say emotion recognition systems are based on the assumption that humans manifest emotions in similar ways. Something as simple as a raised eyebrow may have different meanings in different cultures. Luke Stark, assistant professor in the faculty of information and media studies at the University of Western Ontario, said in an interview, “Emotions are simultaneously made up of physiological, mental, psychological, cultural, and individually subjective phenomenological components. No single measurable element of an emotional response is ever going to tell you the whole story. Philosopher Jesse Prinz calls this “the problem of parts.” In a recent essay for Nature, Professor Kate Crawford said many such algorithms are based on psychologist Paul Ekman’s study, conducted in the 1960s, on nonverbal behaviour. According to him, there are six basic emotions – happiness, sadness, fear, anger, surprise and disgust. Ekman’s work and ideas have formed the basis for emotion-detection technologies used by giants such as Microsoft, IBM, and Amazon.


Three Critical Success Factors for Master Data Management

The biggest danger to a nascent MDM program is starting with the wrong objectives, even though those objectives can often sound quite right. The best practice here is to start with discrete and measurable business outcomes. A key acid test in this scenario is the ability to describe the outcomes of MDM in nontechnical terms that the business can understand and champion, both before and after they are delivered. If you can’t do this, then you likely have the wrong objective! ... My experience has shown that the vast majority of enterprises stumble at this point, but it’s a great method to get IT teams to see the issue that they are eventually going to have in maintaining momentum over the life of the MDM program. It is also helpful to consider business outcomes as divided along two axes: those that make money vs. those that save money, and current but sub-optimal vs. net new business processes. While most IT teams are capable of solving those use cases in the lower left quadrant (saving money on current but sub-optimal processes) on their own, true digital transformation resides in the upper right quadrant (making money from net new processes), and requires full participation from the business in identifying, describing, and quantifying these outcomes.


CSPM explained: Filling the gaps in cloud security

The issue for all cloud-based technologies is that they inherently lack a perimeter. This means that while you can have some protection, no simple method can determine which processes or persons are supposed to have access and keep out those who don’t have access rights. You need a combination of protective measures to ensure this. The other challenge is that manual processes can’t keep up with scaling, containers, and APIs. This is largely why what is now called infrastructure as code has caught on, in which infrastructure is managed and provisioned by machine-readable definition files. These files depend on an API-driven approach. This approach is integral to cloud-first environments because it makes it easy to change the infrastructure on the fly, but also makes it easy to create misconfigurations that leave the environment open to vulnerabilities. Speaking of containers, it is also hard to track them across the numerous cloud offerings that are available. Amazon Web Services (AWS) alone has its Elastic Container Service, its serverless compute engine Fargate ...
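
A cloud security posture management tool is, at its heart, a fleet of automated checks run continuously against cloud APIs. The sketch below shows one such check with boto3 – it flags S3 buckets with no public access block, a classic misconfiguration – and assumes AWS credentials are already configured; the severity label and output format are invented for illustration.

# Minimal CSPM-style check: flag S3 buckets with no (or a partial) public
# access block. A real CSPM product runs thousands of such checks across
# accounts and providers, and tracks configuration drift over time.
import boto3
from botocore.exceptions import ClientError

s3 = boto3.client("s3")

def check_public_access_blocks():
    findings = []
    for bucket in s3.list_buckets()["Buckets"]:
        name = bucket["Name"]
        try:
            config = s3.get_public_access_block(Bucket=name)["PublicAccessBlockConfiguration"]
            if not all(config.values()):
                findings.append((name, "public access block only partially enabled"))
        except ClientError as err:
            if err.response["Error"]["Code"] == "NoSuchPublicAccessBlockConfiguration":
                findings.append((name, "no public access block configured"))
            else:
                raise
    return findings

for bucket_name, issue in check_public_access_blocks():
    print(f"[HIGH] {bucket_name}: {issue}")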


Tough regulations are coming for the cryptocurrency sector

The cryptocurrency sector needs an international framework that regulates it. This could be introduced to restrict its usage in all countries. At the moment, countries have a disjointed approach to regulating this sector – if they are even regulating it at all. Some countries such as Japan passed regulations in favor of cryptocurrencies, recognizing them as legal property, with the sector under the full supervision of the Financial Services Agency. Other countries like India are looking to ban this sector; in March 2021, the Indian government was due to introduce a digital currency bill that would have made cryptocurrencies illegal in the country. China is furthering its restrictions by prohibiting financial institutions from engaging in related transactions. The decision to restrict or ban the use of cryptocurrencies by countries is an attempt to limit the influence that the sector can have on the world economy, as they wouldn’t want to surrender the control of their economy to a decentralized currency. In the UK, the Bank of England released a discussion paper in which it explains that stablecoins should expect the same regulations as fiat currencies; in this report it also mentions that it is exploring the potential introduction of its own digital currency, the “Britcoin”.


What Makes Quantum Computing So Hard to Explain?

Let’s start with quantum mechanics. (What could be deeper?) The concept of superposition is infamously hard to render in everyday words. So, not surprisingly, many writers opt for an easy way out: They say that superposition means “both at once,” so that a quantum bit, or qubit, is just a bit that can be “both 0 and 1 at the same time,” while a classical bit can be only one or the other. They go on to say that a quantum computer would achieve its speed by using qubits to try all possible solutions in superposition — that is, at the same time, or in parallel. This is what I’ve come to think of as the fundamental misstep of quantum computing popularization, the one that leads to all the rest. From here it’s just a short hop to quantum computers quickly solving something like the traveling salesperson problem by trying all possible answers at once — something almost all experts believe they won’t be able to do. The thing is, for a computer to be useful, at some point you need to look at it and read an output. But if you look at an equal superposition of all possible answers, the rules of quantum mechanics say you’ll just see and read a random answer. And if that’s all you wanted, you could’ve picked one yourself.
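
One way to see the author's point is a purely classical simulation of the measurement rule: if a register holds an equal superposition over N candidate answers, reading it out yields a uniformly random answer. The sketch below (an editorial illustration, not how quantum algorithms are actually written) makes that concrete with numpy.

    # Classical toy illustration: measuring an *equal* superposition over N
    # candidate answers returns a uniformly random answer (Born rule),
    # which is no better than guessing. This is not a quantum algorithm.
    import numpy as np

    N = 16                                   # number of candidate answers
    amplitudes = np.ones(N) / np.sqrt(N)     # equal superposition
    probabilities = np.abs(amplitudes) ** 2  # Born rule: |amplitude|^2

    measured = np.random.choice(N, p=probabilities)
    print(f"Measured answer: {measured}  (just a uniform random pick)")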


The Future Of Crypto And Blockchain: Fintech 50 2021

One notable graduate of the list is Coinbase, the largest cryptocurrency exchange in the United States, which shook the industry and public markets with its April 14 Nasdaq debut – the largest direct listing in history. At one point during the opening day, Coinbase’s market cap exceeded $100 billion, setting a high bar for crypto startups still eyeing a public offering. Two of this year’s members, Kraken and Gemini (also cryptocurrency exchanges), have discussed going public in the future. But fintech is no longer just a tale of corporate success. Cryptocurrency lenders and exchanges are slowly giving way to the new hot shot of the class – decentralized finance (DeFi). An umbrella term for blockchain-based applications and protocols aiming to replace traditional financial intermediaries like banks and brokerages, DeFi skyrocketed in popularity and market capitalization over the past 12 months – from just over $1 billion in locked value in June 2020 to the current $67.9 billion. The largest among DeFi platforms are lending and borrowing protocols, such as Aave and MakerDAO, and decentralized exchanges like Uniswap and SushiSwap – all built on Ethereum.


Google Hopes AI Can Turn Search Into a Conversation

In the “Rethinking Search” paper, the Google researchers call indexing the workhorse of modern search. But they envision doing away with indexing by using ever-larger language models that can understand more queries. The Knowledge Graph, for example, may serve up answers to factual questions, but it’s trained on only a small portion of the web. Using a language model built from more of the web would allow a search engine to make recommendations, retrieve documents, answer questions, and accomplish a wide range of tasks. The authors of the Rethinking Search paper say the approach has the potential to create a “transformational shift in thinking.” Such a model doesn’t exist. In fact the authors say it may require the creation of artificial general intelligence or advances in fields like information retrieval and machine learning. Among other things, they want the new approach to supply authoritative answers from a diversity of perspectives, clearly reveal its sources, and operate without bias. A Google spokesperson described LaMDA and MUM as part of Google’s research into next-generation language models and said internal pilots are underway for MUM to help people with queries on billions of topics.


Building Reliable Software Systems with Chaos Engineering

Complex systems are inevitable. That’s the short answer, but we can expand on it a bit. As humans, we deal with complexity every day, and the way we deal with it is to build mental models or abstractions of that complexity. In everyday life we handle other complex systems, such as automobile traffic, interactions with other people and animals, or even society as a whole. Decades of IT work has focused on keeping system models simple (e.g. the three-tier web app), and that works great when it is possible. For better or worse, the situations where that is possible are diminishing. We are entering a world where most, and eventually nearly all, software systems will be complex. What do we mean by “complex”? In this case we mean that a system is complex if it is too large, and has too many moving parts, for any single human to mentally model it with predictive accuracy. Twenty years ago, I could write a content management system and basically understand all of the working parts. I could tell you, roughly, what effect a change to the performance of a query would have on the overall performance of the rest of the application, without having to actually try it. That is no longer the case.
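
Chaos engineering probes such systems empirically rather than relying on mental models. As a hedged, minimal sketch (not taken from the article), the Python decorator below injects latency or failures into a call path so a team can observe how the rest of the system responds; the wrapped function and the probabilities are hypothetical.

    # Minimal sketch of chaos-style fault injection: with some probability,
    # a wrapped call either sleeps (extra latency) or raises an error.
    import functools
    import random
    import time

    def chaos(latency_prob=0.2, error_prob=0.05, delay_s=2.0):
        def decorator(fn):
            @functools.wraps(fn)
            def wrapper(*args, **kwargs):
                if random.random() < latency_prob:
                    time.sleep(delay_s)                      # inject latency
                if random.random() < error_prob:
                    raise RuntimeError("chaos: injected failure")
                return fn(*args, **kwargs)
            return wrapper
        return decorator

    @chaos(latency_prob=0.3, error_prob=0.1)
    def fetch_profile(user_id):
        # Stand-in for a real downstream call (database, RPC, etc.).
        return {"user_id": user_id, "name": "example"}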



Quote for the day:

"Leadership without character is unthinkable - or should be." -- Warren Bennis

Daily Tech Digest - June 08, 2021

DeepMind scientists: Reinforcement learning is enough for general AI

A different approach to creating AI, proposed by the DeepMind researchers, is to recreate the simple yet effective rule that has given rise to natural intelligence. “[We] consider an alternative hypothesis: that the generic objective of maximising reward is enough to drive behaviour that exhibits most if not all abilities that are studied in natural and artificial intelligence,” the researchers write. This is basically how nature works. As far as science is concerned, there has been no top-down intelligent design in the complex organisms that we see around us. Billions of years of natural selection and random variation have filtered lifeforms for their fitness to survive and reproduce. Living beings that were better equipped to handle the challenges and situations in their environments managed to survive and reproduce. The rest were eliminated. This simple yet efficient mechanism has led to the evolution of living beings with all kinds of skills and abilities to perceive, navigate, modify their environments, and communicate among themselves. 
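
A miniature, purely illustrative analogue of the "reward is enough" idea is tabular Q-learning, where behaviour emerges solely from maximizing a scalar reward signal. The sketch below (a toy corridor environment of our own devising, not DeepMind's work) learns to walk toward the only rewarding state from nothing but reward feedback.

    # Toy illustration of behaviour driven purely by reward maximization:
    # tabular Q-learning on a 1-D corridor where only the rightmost state pays.
    import numpy as np

    n_states, n_actions = 6, 2          # actions: 0 = left, 1 = right
    Q = np.zeros((n_states, n_actions))
    alpha, gamma, epsilon = 0.1, 0.95, 0.1
    rng = np.random.default_rng(0)

    def step(state, action):
        nxt = max(0, state - 1) if action == 0 else min(n_states - 1, state + 1)
        reward = 1.0 if nxt == n_states - 1 else 0.0   # only the last state pays
        return nxt, reward, nxt == n_states - 1

    def greedy(q_row):
        best = np.flatnonzero(q_row == q_row.max())    # break ties randomly
        return int(rng.choice(best))

    for _ in range(2000):
        state, done = 0, False
        while not done:
            action = int(rng.integers(n_actions)) if rng.random() < epsilon else greedy(Q[state])
            nxt, reward, done = step(state, action)
            target = reward if done else reward + gamma * Q[nxt].max()
            Q[state, action] += alpha * (target - Q[state, action])
            state = nxt

    print(Q.argmax(axis=1))   # greedy action per non-terminal state (1 = move right)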


How To Become A Machine Learning Engineer?

New roles such as machine learning architect are being created today. As platforms get bigger, the machine learning engineer must handle the entire architecture and evolve to meet the needs of the data science, machine learning and data analytics organisations. The most important aspect of an ML engineer’s work is the focus on production and model deployment — not just code that works, but code that functions in the real world, along with an understanding of industry best practices for integrating and deploying machine learning models. For starters, a computer science, robotics, engineering or physics degree, along with competency in C, C++, Java, Python, R, Scala, Julia, and other enterprise languages, helps. A strong understanding of databases also adds weight. In terms of experience, software engineers, software developers, and software architects are well suited to machine learning engineering roles. “It is almost a straight line from cloud architect to ML engineer and ML architect, as these two roles have so much overlap. If you understand data science and machine learning, you can understand models,” said Vashishta.
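
To make "code that functions in the real world" concrete, the sketch below shows one common minimal deployment pattern: a trained model loaded and served behind an HTTP endpoint with Flask. This is an editorial illustration, not from the article; the model file name and the request payload shape are hypothetical, and a production service would add validation, authentication, logging and monitoring.

    # Minimal sketch of serving a trained model over HTTP with Flask.
    # "model.joblib" and the "features" payload are hypothetical placeholders.
    import joblib
    from flask import Flask, jsonify, request

    app = Flask(__name__)
    model = joblib.load("model.joblib")   # any scikit-learn-style estimator

    @app.route("/predict", methods=["POST"])
    def predict():
        features = request.get_json()["features"]   # e.g. [[5.1, 3.5, 1.4, 0.2]]
        prediction = model.predict(features).tolist()
        return jsonify({"prediction": prediction})

    if __name__ == "__main__":
        app.run(host="0.0.0.0", port=8080)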


.NET Ranks High in Coding Bootcamp Report

Microsoft's .NET development framework ranked high in recent research about coding bootcamps, or "immersive technology education." With an average cost of about $14,000, these accelerated learning programs can last from six to 28 weeks -- averaging about 14 -- and promise to advance careers in both technical chops and bottom-line salary increases. Course Report studies the industry and presents its findings in annual reports that can help coders pick the best option among some 500 bootcamps around the world, with the choice of programming language being a primary factor. "Coding bootcamps employ teaching languages to introduce students to the world of programming," Course Report said in its latest study, Coding Bootcamps in 2021, an update of a 2020 report. "While language shouldn't be the main deciding factor when choosing a bootcamp, students may have specific career goals that guide them towards a particular language. In that case, first decide whether you'd prefer to learn web or mobile development. For the web, your main choices are Ruby, Python, LAMP stack, MEAN stack and .NET languages."


Amazon Sidewalk starts sharing your WiFi tomorrow, thanks

Amazon Sidewalk will create a mesh network between smart devices that are located near one another in a neighborhood. Through the network, if, for instance, a home WiFi network goes down, the Amazon smart devices connected to that home network will still be able to function, because they will borrow internet connectivity from neighboring products. Data transfer between homes will be capped, and the data communicated through Amazon Sidewalk will be encrypted. Amazon smart device owners will automatically be enrolled in Amazon Sidewalk, but they can opt out before a June 8 deadline. That deadline has irked many cybersecurity and digital rights experts, as Amazon Sidewalk itself was not unveiled until June 1—just one week before a mass rollout. Jon Callas, director of technology projects at the Electronic Frontier Foundation, told the news outlet ThreatPost that he did not even know about Amazon’s white paper on the privacy and security protocols of Sidewalk until a reporter emailed him about it. “They dropped this on us,” Callas said. “They gave us seven days to opt out.”


Researchers Discover a Molecule Critical to Functional Brain Rejuvenation

Recent studies suggest that new brain cells are being formed every day in response to injury, physical exercise, and mental stimulation. Glial cells, and in particular the ones called oligodendrocyte progenitors, are highly responsive to external signals and injuries. They can detect changes in the nervous system and form new myelin, which wraps around nerves and provides metabolic support and accurate transmission of electrical signals. As we age, however, less myelin is formed in response to external signals, and this progressive decline has been linked to the age-related cognitive and motor deficits detected in older people in the general population. Impaired myelin formation also has been reported in older individuals with neurodegenerative diseases such as Multiple Sclerosis or Alzheimer’s and identified as one of the causes of their progressive clinical deterioration. ... The discovery also could have important implications for molecular rejuvenation of aging brains in healthy individuals, said the researchers. Future studies aimed at increasing TET1 levels in older mice are underway to define whether the molecule could rescue new myelin formation and favor proper neuro-glial communication.


Fujifilm refuses to pay ransomware demand, restores network from backups

Jake Moore, cybersecurity specialist at internet security firm ESET, said refusing to pay a ransom is “not a decision to be taken lightly.” Ransomware gangs often threaten to leak or sell sensitive data if payment is not made. However, Fujifilm Europe said it is “highly confident that no loss, destruction, alteration, unauthorised use or disclosure of our data, or our customers’ data, on Fujifilm Europe’s systems has been detected.” The spokesperson added: “From a European perspective, we have determined that there is no related risk to our network, servers and equipment in the EMEA region or that of our customers across EMEA. We presently have no indication that any of our regional systems have been compromised, including those involving customer data.” It is not clear if the ransomware gang stole Fujifilm data from the affected network in Japan. Fujifilm declined to comment when asked if those responsible had threatened to publish data if the ransom is not paid. According to security news site Bleeping Computer, Fujifilm was infected with the Qbot trojan last month.


Fixing Risk Sharing With Observability

The challenge is that one party, the developers, has more information than other parties. That information asymmetry is what creates unbalanced risk sharing. Coping with information asymmetry has led to all kinds of new collaborative models, starting with DevOps and evolving into DevSecOps and other permutations like BizDevSecOps. True collaboration has been hard to come by. Early DevOps efforts are often successful, but scaling beyond five to seven teams is difficult because teams lack the breadth of experience in IT operations or the SRE capacity to staff multiple product teams. The change velocity DevOps teams can achieve is often far greater than SREs and SecOps can absorb, making information asymmetry worse. If teams can’t maintain high levels of collaboration and communication, another option must be developed. Observability practices, like collecting all events, metrics, traces and logs, allow SREs and SecOps teams to interrogate applications about their behavior without knowing which questions they want to ask ahead of time. However, observability only works if applications, and the infrastructure they rely on, are instrumented.
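
As a rough illustration of what "instrumented" means in practice (not taken from the article), the sketch below emits a structured JSON event for each handled request using only Python's standard library, so the behaviour can be queried after the fact without knowing the questions in advance. The field names are hypothetical; real deployments typically rely on OpenTelemetry or a vendor SDK.

    # Minimal sketch of instrumentation for observability: each request emits a
    # structured (JSON) event with a trace id, status and duration.
    import json
    import logging
    import time
    import uuid

    logging.basicConfig(level=logging.INFO, format="%(message)s")
    log = logging.getLogger("checkout")

    def handle_request(order_id):
        event = {"event": "checkout.request", "order_id": order_id,
                 "trace_id": str(uuid.uuid4())}
        start = time.perf_counter()
        try:
            # ... real business logic would run here ...
            event["status"] = "ok"
        except Exception as exc:
            event["status"] = "error"
            event["error"] = repr(exc)
            raise
        finally:
            event["duration_ms"] = round((time.perf_counter() - start) * 1000, 2)
            log.info(json.dumps(event))

    handle_request("A-1001")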


How to Structure a Digital Transformation Project Team

The first role on a project team, and arguably the most important, is your executive steering committee. This is typically a cross-functional group of executives within your organization that is responsible for setting the vision for the overall transformation. They’re responsible for approving scope changes or any material changes to the project plan or the budget. They’re ultimately responsible for setting the tone and the vision for the overall future state. The question to ponder here is: how do we want our operating model to look, and what do we want our organization to look like in the future? For lack of a better word, what do we want to be when we grow up? Before I get into the rest of the project team, something that’s very important even before we talk about other team roles is who should fill those roles. The first thing you want to do is make sure that the steering committee is aligned on the overall transformation vision, strategy, and objectives. If you start filling out the project team prematurely, when you don’t have that alignment ...


Windows Container Malware Targets Kubernetes Clusters

After it compromises web servers, Siloscape uses container escape tactics to achieve code execution on the Kubernetes node. Prizmant said that Siloscape’s heavy use of obfuscation made it a chore to reverse-engineer. “There are almost no readable strings in the entire binary. While the obfuscation logic itself isn’t complicated, it made reversing this binary frustrating,” he explained. The malware obfuscates functions and module names – including simple APIs – and only deobfuscates them at runtime. Instead of just calling the functions, Siloscape “made the effort to use the Native API (NTAPI) version of the same function,” he said. “The end result is malware that is very difficult to detect with static analysis tools and frustrating to reverse engineer.” “Siloscape is being compiled uniquely for each new attack, using a unique pair of keys,” Prizmant continued. “The hardcoded key makes each binary a little bit different than the rest, which explains why I couldn’t find its hash anywhere. It also makes it impossible to detect Siloscape by hash alone.”


Machine learning at the edge: TinyML is getting big

Whether it's stand-alone IoT sensors, devices of all kinds, drones, or autonomous vehicles, there's one thing in common. Increasingly, data generated at the edge is used to feed applications powered by machine learning models. There's just one problem: machine learning models were never designed to be deployed at the edge. Not until now, at least. Enter TinyML. Tiny machine learning (TinyML) is broadly defined as a fast-growing field of machine learning technologies and applications, including hardware, algorithms and software, capable of performing on-device sensor data analytics at extremely low power, typically in the mW range and below, and hence enabling a variety of always-on use cases on battery-operated devices. ... First, the working definition of what constitutes TinyML was, and to some extent still is, debated. What matters is how devices can be deployed in the field and how they're going to perform, said Gousev. That will be different depending on the device and the use case, but the point is being always on and not having to change batteries every week. That can only happen in the mW range and below.
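
A common step in that workflow is shrinking a trained model with post-training quantization before flashing it to a microcontroller-class device. The sketch below (an editorial illustration, not from the article) does this with TensorFlow Lite's converter; the tiny Keras model is a stand-in for a real trained network, and on-device deployment (for example with TensorFlow Lite Micro) is a separate step.

    # Sketch of a common TinyML workflow step: convert a Keras model to a
    # quantized TensorFlow Lite flatbuffer small enough for edge hardware.
    import tensorflow as tf

    def to_tiny(model, path="model.tflite"):
        converter = tf.lite.TFLiteConverter.from_keras_model(model)
        converter.optimizations = [tf.lite.Optimize.DEFAULT]   # weight quantization
        tflite_model = converter.convert()
        with open(path, "wb") as f:
            f.write(tflite_model)
        print(f"Wrote {len(tflite_model)} bytes to {path}")

    # Placeholder network standing in for a real, already-trained model.
    model = tf.keras.Sequential([
        tf.keras.Input(shape=(4,)),
        tf.keras.layers.Dense(8, activation="relu"),
        tf.keras.layers.Dense(1),
    ])
    to_tiny(model)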



Quote for the day:

"If you're relying on luck, you have already given up." -- Gordon Tredgold