Daily Tech Digest - June 13, 2021

The race is on for quantum-safe cryptography

Existing encryption systems rely on specific mathematical equations that classical computers aren’t very good at solving — but quantum computers may breeze through them. As a security researcher, Chen is particularly interested in quantum computing’s ability to solve two types of math problems: factoring large numbers and solving discrete logarithms. Pretty much all internet security relies on this math to encrypt information or authenticate users in protocols such as Transport Layer Security. These math problems are simple to perform in one direction, but difficult in reverse, and thus ideal for a cryptographic scheme. “From a classical computer’s point of view, these are hard problems,” says Chen. “However, they are not too hard for quantum computers.” In 1994, the mathematician Peter Shor outlined in a paper how a future quantum computer could solve both the factoring and discrete logarithm problems, but engineers are still struggling to make quantum systems work in practice. While several companies like Google and IBM, along with startups such as IonQ and Xanadu, have built small prototypes, these devices cannot perform consistently, and they have not conclusively completed any useful task beyond what the best conventional computers can achieve.
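
The one-way property described here can be illustrated with a toy discrete logarithm in Python. The parameters below are deliberately tiny and invented (real systems use groups hundreds of digits long): exponentiating is fast in one direction, while recovering the exponent is slow for known classical algorithms, yet would be easy for a large quantum computer running Shor's algorithm.

```python
# Toy illustration of the discrete logarithm problem (not real cryptography:
# the modulus is tiny, and real schemes use far larger groups).
p, g = 2_147_483_647, 5      # public prime modulus and generator
secret = 1_234_567           # private exponent

# Forward direction: fast even for huge numbers (square-and-multiply).
public = pow(g, secret, p)

# Reverse direction: no efficient classical algorithm is known; this naive
# search does work proportional to the size of the secret itself.
def discrete_log(g, target, p):
    x, acc = 0, 1
    while acc != target:
        acc = (acc * g) % p
        x += 1
    return x

found = discrete_log(g, public, p)
assert pow(g, found, p) == public    # recovered an exponent matching the secret
```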


Lightbend’s Akka Serverless PaaS to Manage Distributed State at Scale

Up to now, serverless technology has not been able to support stateful, high-performance, scalable applications that enterprises are building today, Murdoch said. Examples of such applications include consumer and industrial IoT, factory automation, modern e-commerce, real-time financial services, streaming media, internet-based gaming and SaaS applications. “Stateful approaches to serverless application design will be required to support a wide range of enterprise applications that can’t currently take advantage of it, such as e-commerce, workflows and anything requiring a human action,” said William Fellows, research director for cloud native at 451 Research. “Serverless functions are short-lived and lose any ‘state’ or context information when they execute.” Lightbend, with Akka Serverless, has addressed the challenge of managing distributed state at scale. “The most significant piece of feedback that we’ve been getting from the beta is that one of the key things that we had to do to build this platform was to find a way to be able to make the data be available in memory at runtime automatically, without the developer having to do anything,” Murdoch said.
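
A common way to deliver what Murdoch describes, state rebuilt in memory before user code runs, is event sourcing: the runtime replays an entity's stored events to rehydrate its state. Below is a minimal Python sketch of that pattern; the class, handler names, and journal are invented for illustration and are not the actual Akka Serverless SDK.

```python
# Minimal event-sourced entity sketch: the runtime replays stored events to
# rebuild an entity's state in memory before invoking business logic, so the
# developer never loads state explicitly. Hypothetical API, not Akka's SDK.
class CartEntity:
    def __init__(self):
        self.items = {}                      # in-memory state

    def apply(self, event):                  # called by the runtime on replay
        if event["type"] == "ItemAdded":
            item = event["item"]
            self.items[item] = self.items.get(item, 0) + event["qty"]

    def add_item(self, item, qty):           # command handler: emit an event
        return {"type": "ItemAdded", "item": item, "qty": qty}

def handle_command(journal, entity_cls, command):
    entity = entity_cls()
    for event in journal:                    # runtime rehydrates state
        entity.apply(event)
    event = entity.add_item(**command)       # business logic sees warm state
    entity.apply(event)
    journal.append(event)                    # persist the new event
    return entity

journal = []
cart = handle_command(journal, CartEntity, {"item": "book", "qty": 2})
print(cart.items)                            # {'book': 2}
```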


Can We Balance Accuracy and Fairness in Machine Learning?

While challenges like these often sound theoretical, they already affect and shape the work that machine learning engineers and researchers produce. Angela Shi looks at a practical application of this conundrum when she explains the visual representation of bias and variance in bull’s-eye diagrams. Taking a few steps back, Federico Bianchi and Dirk Hovy’s article identifies the most pressing issues the authors and their colleagues face in the field of natural language processing (NLP): “the speed with which models are published and then used in applications can exceed the discovery of their risks and limitations. And as their size grows, it becomes harder to reproduce these models to discover those aspects.” Federico and Dirk’s post stops short of offering concrete solutions—no single paper could—but it underscores the importance of learning, asking the right (and often most difficult) questions, and refusing to accept an untenable status quo. If what inspires you to take action is expanding your knowledge and growing your skill set, we have some great options for you to choose from this week, too.
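
The bull's-eye picture of bias and variance can be reproduced numerically: fit many models on resampled noisy data, then measure how far the average prediction lands from the truth (bias) and how widely individual predictions scatter (variance). A small sketch with invented numbers:

```python
import numpy as np

rng = np.random.default_rng(0)
true_slope = 2.0                         # the underlying relationship y = 2x

def fit_once(n=20, degree=1):
    """One 'shot at the target': fit a noisy sample, predict at x = 0.5."""
    x = rng.uniform(0, 1, n)
    y = true_slope * x + rng.normal(0, 0.3, n)
    return np.polyval(np.polyfit(x, y, degree), 0.5)

preds = np.array([fit_once() for _ in range(1000)])
truth = true_slope * 0.5
bias = preds.mean() - truth              # distance of the average shot from center
variance = preds.var()                   # spread of the shots around their average
print(f"bias={bias:.4f}, variance={variance:.4f}")
```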


The secret of making better decisions, faster

While agility might be critical for sporting success, that doesn't mean it's easily achieved. Filippi tells ZDNet he's spent many years building a strong team, with great heads of department who are empowered to make big calls. "Most of the time you trust them to get on with it," he says. "I'm more of an orchestrator – you cannot micromanage a race team because there's just too much going on. The pace and the volume of work being achieved every week is just mind-blowing." Hackland has similar experiences at Williams F1. Employees are empowered to take decisions and their confidence to make those calls in the factory or out on the track is a crucial component of success. "The engineer who's sitting on the pit wall doesn't have to ask the CIO if we should pit," he says. "The decisions that are made all through the organisation don't feed up to one single individual. Everyone is allowed to make decisions up or down the organisation." As well as being empowered to make big calls, Hackland says a no-blame culture is critical to establishing and supporting decentralised decision making in racing teams.


How to avoid the ethical pitfalls of artificial intelligence and machine learning

Disconnects also exist between key functional stakeholders required to make sound holistic judgements around ethics in AI and ML. “There is a gap between the bit that is the data analytics AI, and the bit that is the making of the decision by an organisation. You can have really good technology and AI generating really good outputs that are then used really badly by humans, and as a result, this leads to really poor outcomes,” says Prof. Leonard. “So, you have to look not only at what the technology in the AI is doing, but how that is integrated into the making of the decision by an organisation.” This problem exists in many fields. One field in which it is particularly prevalent is digital advertising. Chief marketing officers, for example, determine marketing strategies that are dependent upon the use of advertising technology – which is in turn managed by a technology team. Separate from this is data privacy, which is managed by a different team, and Prof. Leonard says these teams don’t speak the same language, which makes it difficult to arrive at a strategically cohesive decision.


Five types of thinking for a high performing data scientist

As data scientists, the first and foremost skill we need is to think in terms of models. In its most abstract form, a model is any physical, mathematical, or logical representation of an object, property, or process. Let’s say we want to build an aircraft engine that will lift heavy loads. Before we build the complete aircraft engine, we might build a miniature model to test the engine for a variety of properties (e.g., fuel consumption, power) under different conditions (e.g., headwind, impact with objects). Even before we build a miniature model, we might build a 3-D digital model that can predict what will happen to the miniature model built out of different materials. ... Data scientists often approach problems with cross-sectional data at a point in time to make predictions or inferences. Unfortunately, given the constantly changing context around most problems, very few things can be analyzed statically. Static thinking reinforces the ‘one-and-done’ approach to model building that is misleading at best and disastrous at worst. Even simple recommendation engines and chatbots trained on historical data need to be updated on a regular basis.
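
One concrete antidote to static, one-and-done model building is a scheduled drift check that triggers retraining. Here is a sketch using the population stability index, a common drift score; the 0.2 threshold is a rule of thumb and the data is simulated:

```python
import numpy as np

def population_stability_index(expected, actual, bins=10):
    """PSI: a common drift score comparing training-time vs. live feature
    distributions. Values above roughly 0.2 usually signal meaningful drift."""
    edges = np.histogram_bin_edges(expected, bins=bins)
    e, _ = np.histogram(expected, bins=edges)
    a, _ = np.histogram(actual, bins=edges)
    e = np.clip(e / e.sum(), 1e-6, None)
    a = np.clip(a / a.sum(), 1e-6, None)
    return float(np.sum((a - e) * np.log(a / e)))

train_feature = np.random.normal(0.0, 1.0, 10_000)   # what the model saw
live_feature = np.random.normal(0.4, 1.2, 10_000)    # what production sees now

if population_stability_index(train_feature, live_feature) > 0.2:
    print("Drift detected: schedule retraining")      # dynamic, not static
```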


Double Trouble – the Threat of Double Extortion Ransomware

Over the past 12 months, double extortion attacks have become increasingly common as the ‘business model’ has proven effective. The data center giant Equinix was hit by the Netwalker ransomware. The threat actor behind that attack was also responsible for the attack against K-Electric, the largest power supplier in Pakistan, demanding $4.5 million in Bitcoin for decryption keys and to stop the release of stolen data. Other companies known to have suffered such attacks include the French system and software consultancy Sopra Steria; the Japanese game developer Capcom; the Italian liquor company Campari Group; the US military missile contractor Westech; the global aerospace and electronics engineering group ST Engineering; travel management giant CWT, which paid $4.5M in Bitcoin to the Ragnar Locker ransomware operators; business services giant Conduent; even soccer club Manchester United. Research shows that in Q3 2020, nearly half of all ransomware cases included the threat of releasing stolen data, and the average ransom payment was $233,817 – up 30% compared to Q2 2020. And that’s just the average ransom paid.


Evolution of code deployment tools at Mixpanel

Manual deploys worked surprisingly well while we were getting our services up and running. More and more features were added to mix to interact not just with k8s but also with other GCP services. To avoid dealing with raw YAML files directly, we moved our k8s configuration management to Jsonnet. Jsonnet allowed us to add templates for commonly used paradigms and reuse them in different deployments. At the same time, we kept adding more k8s clusters. We added more geographically distributed clusters to run the servers handling incoming data to decrease latency perceived by our ingestion API clients. Around the end of 2018, we started evaluating a European Data Residency product. That required us to deploy another full copy of all our services in two zones in the European Union. We were now up to 12 separate clusters, and many of them ran the same code and had similar configurations. While manual deploys worked fine when we ran code in just two zones, it quickly became infeasible to keep 12 separate clusters in sync manually. Across all our teams, we run more than 100 separate services and deployments.
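
The move from raw YAML to templates is what keeps N similar clusters in sync from one definition. The author used Jsonnet; the Python sketch below, with invented cluster names and overrides, illustrates the same render-per-cluster idea:

```python
import copy

# One base definition, rendered per cluster (the author used Jsonnet; this
# Python sketch with made-up cluster names just illustrates the idea).
BASE = {
    "apiVersion": "apps/v1",
    "kind": "Deployment",
    "spec": {"replicas": 3, "template": {"spec": {"containers": [
        {"name": "ingest", "image": "ingest:v42"}]}}},
}

CLUSTERS = {                  # hypothetical per-cluster overrides
    "us-central": {"replicas": 6},
    "eu-west": {"replicas": 4},
    "eu-north": {"replicas": 2},
}

def render(cluster, overrides):
    manifest = copy.deepcopy(BASE)
    manifest["metadata"] = {"name": f"ingest-{cluster}"}
    manifest["spec"]["replicas"] = overrides["replicas"]
    return manifest

# One source of truth, N manifests: keeping clusters in sync is automatic.
manifests = {name: render(name, ov) for name, ov in CLUSTERS.items()}
```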


When physics meets financial networks

Generally, physics and financial systems are not easily associated in people's minds. Yet, principles and techniques originating from physics can be very effective in describing the processes taking place on financial markets. Modeling financial systems as networks can greatly enhance our understanding of phenomena that are relevant not only to researchers in economics and other disciplines, but also to ordinary citizens, public agencies and governments. The theory of Complex Networks represents a powerful framework for studying how shocks propagate in financial systems, identifying early-warning signals of forthcoming crises, and reconstructing hidden linkages in interbank systems. ... Here is where network theory comes into play, by clarifying the interplay between the structure of the network, the heterogeneity of the individual characteristics of financial actors and the dynamics of risk propagation, in particular contagion, i.e. the domino effect by which the instability of some financial institutions can reverberate to other institutions to which they are connected. The associated risk is indeed "systemic", i.e. both produced and faced by the system as a whole, as in collective phenomena studied in physics.
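
A minimal version of the contagion dynamics described here can be simulated directly: fail one institution and let distress cascade to any neighbour whose share of failed counterparties crosses a threshold. A toy sketch (random network and threshold are illustrative, not calibrated to real interbank data):

```python
import networkx as nx

# Toy threshold-contagion model on an interbank network: a bank fails when
# the fraction of its failed counterparties exceeds its resilience threshold.
G = nx.erdos_renyi_graph(n=50, p=0.1, seed=1)
threshold = 0.3
failed = {0}                                  # initial shock: bank 0 defaults

changed = True
while changed:                                # propagate until stable
    changed = False
    for bank in G.nodes:
        if bank in failed:
            continue
        nbrs = list(G.neighbors(bank))
        if nbrs and sum(n in failed for n in nbrs) / len(nbrs) > threshold:
            failed.add(bank)                  # the domino effect
            changed = True

print(f"{len(failed)} of {G.number_of_nodes()} banks failed")
```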


What’s Driving the Surge in Ransomware Attacks?

The trend involves a complex blend of geopolitical and cybersecurity factors, but the underlying reasons for its recent explosion are simple. Ransomware attacks have gotten incredibly easy to execute, and payment methods are now much more friendly to criminals. Meanwhile, businesses are growing increasingly reliant on digital infrastructure and more willing to pay ransoms, thereby increasing the incentive to break in. As the New York Times notes, for years “criminals had to play psychological games to trick people into handing over bank passwords and have the technical know-how to siphon money out of secure personal accounts.” Now, young Russians with a criminal streak and a cash imbalance can simply buy the software and learn the basics on YouTube tutorials, or by getting help from syndicates like DarkSide — which even charge clients a fee to set them up to hack into businesses in exchange for a portion of the proceeds. The breach of the education publisher involving the false pedophile threat was a successful example of such a criminal exchange. Meanwhile, Bitcoin has made it much easier for cybercriminals to collect on their schemes.



Quote for the day:

"To make a decision, all you need is authority. To make a good decision, you also need knowledge, experience, and insight." -- Denise Moreland

Daily Tech Digest - June 12, 2021

Data Architecture: One Size Does Not Fit All

“There’s no one right way,” said George Yuhasz, VP and Head of Enterprise Data at NewRez, because the demands and the value that Data Architecture practices bring to an organization are as varied as the number of firms trying to get value from data. Yuhasz was speaking at the DATAVERSITY® Data Architecture Online Conference. The very definition of Data Architecture varies as well, he says, so get clarity among stakeholders to understand the constraints and barriers in which Data Architecture needs to fit. Will the organization prioritize process alone? Or process, platforms, and infrastructure? Or will it be folded into a larger enterprise architecture? Without a clear definition, it’s impossible to determine key success criteria, or to know what success is, both in the short term and long term. The definition should be simple enough to be understood by a diverse group of stakeholders, and elegant enough to handle sophistication and nuance. Without it, he said, the tendency will be to “drop everything that even relates to the term ‘data’ onto your plate.”


Chrome zero-day, hot on the heels of Microsoft’s IE zero-day. Patch now!

This bug is listed as a “type confusion in V8”, where V8 is the part of Chrome that runs JavaScript code, and type confusion means that you can feed V8 one sort of data item but trick JavaScript into handling it as if it were something else, possibly bypassing security checks or running unauthorised code as a result. For example, if your code is doing JavaScript calculations on a data object that has a memory block of 16 bytes allocated to it, but you can trick the JavaScript interpreter into thinking that you are working on an object that uses 1024 bytes of memory, you can probably end up sneakily writing data outside the official 16-byte allocation, thus pulling off a buffer overflow attack. And, as you probably know, JavaScript security holes that can be triggered by JavaScript code embedded in a web page often result in RCE exploits, or remote code execution. That’s because you’re relying on your browser’s JavaScript engine to keep control over what is essentially unknown and untrusted program code downloaded and executed automatically from an external source.
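
To make the mechanism concrete, here is a deliberately safe toy model in Python. A real exploit corrupts V8's heap objects, not a bytearray, but the shape of the bug is the same: a size the engine wrongly believes lets a write spill into the neighbouring object.

```python
# Safe toy model of type confusion causing an out-of-bounds write. A real
# exploit corrupts V8's heap; this bytearray stands in for adjacent memory.
heap = bytearray(32)
heap[16:32] = b"NEIGHBOR-OBJECT!"    # object B lives right after object A

def write_to_object_a(believed_size, payload):
    # The confused engine believes A is `believed_size` bytes, so it never
    # checks the write against A's real 16-byte allocation.
    n = min(len(payload), believed_size)
    heap[0:n] = payload[:n]

write_to_object_a(1024, b"A" * 20)   # 20 bytes "fit" inside a 1024-byte object
print(heap[16:32])                   # b'AAAAHBOR-OBJECT!': B has been corrupted
```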


New quantum repeaters could enable a scalable quantum internet

If it can be built, a quantum internet would allow calculations to be distributed between multiple quantum computers – allowing larger and more complex problems to be solved. A quantum internet would also provide secure communications because eavesdropping on the exchange of quantum information can be easily identified. The backbone of such a quantum network would be quantum-mechanically entangled links between different network points, called nodes. However, creating entangled links over long distances at high data rates remains a challenge. A big problem is that quantum information becomes degraded as it is transmitted, and the rules of quantum mechanics do not allow signals to be amplified by conventional repeater nodes. The solution could be quantum repeaters, which can amplify quantum signals while still obeying quantum physics. Now, two independent research groups — one at the Institute of Photonic Sciences (ICFO) in Spain and the other at the University of Science and Technology of China (USTC) — have shown how quantum memories (QMs) offer a path towards practical quantum repeaters.


3 Mindsets High-Performing Business Leaders Use to Create Growth

Success in life comes from understanding that a lot will happen to you outside of your control. As humans, we have emotions and feelings — they tend to take over when something happens to us that's outside of our control. When you focus on the things you can't control, you put yourself in a dark place that threatens to spiral your mind. High-performing entrepreneurs don't invest time, energy and emotion into situations that are outside of their control. Growth-focused business leaders make a deliberate effort to optimize their mind, body and spirit. They do the work to operate in a peak state and learn the techniques to get back into a peak state when they feel themselves slipping. ... Authentic business leadership means you create wealth through purposeful work and the desire to build a legacy. You need a vision for where you're going if you plan to get there and experience the benefits of entrepreneurship. Whether it's setting up a vision board or having your goals displayed on your phone's screensaver, you grow when you have a vision and implement growth strategies consistently. 


Has Serverless Jumped the Shark?

Today’s hyped technologies — such as functions-as-a-service offerings like Amazon Lambda, serverless frameworks for Kubernetes like Knative and other non-FaaS serverless solutions like database-as-a-service (DBaaS) — are the underpinnings of more advanced delivery systems for digital business. New methods of infrastructure delivery and consumption, such as cloud computing, are as much a cultural innovation as a technological one, most obviously in DevOps. Even with these technological innovations, companies will still consume a combination of legacy application data, modern cloud services and other serverless architectures to accomplish their digital transformation. The lynchpin isn’t a wholesale move to new technologies, but rather the ability to integrate these technologies in a way that eases the delivery of digital experiences and is invisible to the end-user. Serverless hasn’t jumped the shark. Rather, it is maturing. The Gartner Hype Cycle, a graphic representation of the maturity and adoption of technologies and applications, forecast in 2017 that it would take two to five years for serverless to move from a point of inflated expectations to a plateau of productivity.


Opinion: Andreessen Horowitz is dead wrong about cloud

In The Cost of Cloud, a Trillion-Dollar Paradox, Andreessen Horowitz Capital Management’s Sarah Wang and Martin Casado highlighted the case of Dropbox closing down its public cloud deployments and returning to the datacenter. Wang and Casado extrapolated the fact that Dropbox and other enterprises realized savings of 50% or more by bailing on some or all of their cloud deployments to the wider cloud-consuming ecosphere. Wang and Casado’s conclusion? Public cloud is more than doubling infrastructure costs for most enterprises relative to legacy data center environments. ... Well-architected and well-operated cloud deployments will be highly successful compared to datacenter deployments in most cases. However, “highly successful” may or may not mean less expensive. A singular comparison between the cost of cloud versus the cost of a datacenter shouldn’t be made as an isolated analysis. Instead, it’s important to analyze the differential ROI of one set of costs versus the alternative. While this is true for any expenditure, it’s doubly true for public cloud, since migration can have profound impacts on revenue. Indeed, the major benefits of the cloud are often related to revenue, not cost.
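
That differential-ROI framing fits in a few lines of arithmetic. All figures below are invented for illustration; the point is that a raw cost comparison and an ROI comparison can point in opposite directions:

```python
# Hypothetical numbers: cloud may cost 2x the datacenter and still win once
# revenue effects (faster shipping, elasticity) enter the comparison.
dc_cost, cloud_cost = 10_000_000, 20_000_000             # annual infra spend
dc_revenue_uplift, cloud_revenue_uplift = 0, 25_000_000  # revenue attributable

dc_net = dc_revenue_uplift - dc_cost
cloud_net = cloud_revenue_uplift - cloud_cost
print(f"differential ROI of cloud vs datacenter: ${cloud_net - dc_net:,}")
# -> $15,000,000 in cloud's favour despite doubled infrastructure cost
```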


What Is Penetration Testing -Strategic Approaches & Its Types?

Social engineering plays a crucial role in penetration testing: it is the test that probes the human network of an organization. This test helps guard against a potential attack from within the organization, whether by an employee looking to start a breach or by an employee tricked into sharing data. It comes in both remote and physical forms, which target the most common social engineering tactics used by ethical hackers: phishing attacks, impostors, tailgating, pretexting, gifts, dumpster diving, and eavesdropping, to name a few. Organizations need penetration testing professionals, and at least a minimum knowledge of the practice, to secure themselves from cyberattacks. Testers use different approaches to find attacks and defend against them, and there are five types of penetration testing: network, web application, client-side, wireless network, and social engineering penetration tests. One of the best ways to learn penetration testing is the EC-Council Certified Penetration Testing Professional (CPENT) course. Beyond working in flat networks, this course boosts your understanding by teaching how to pen test OT and IoT systems ...


Global chip shortage: How manufacturers can cope over the long term

Although it is very important to shift more production to the U.S., there are challenges in doing so, Asaduzzaman said. "Currently the number of semiconductor fabrication foundries in the U.S. is not adequate. If we help overseas-based companies to build factories here, that will be good. But we definitely don't want to send all our technology production overseas and then have no control. That will be a big mistake." However, one ray of hope is that policy provisions may encourage domestic production of semiconductors, Asaduzzaman said. "For instance, regulations could require U.S. companies that buy semiconductors to purchase a certain percentage from domestic producers. Industries have to use locally produced chips to make sure that local chip industries can sustain." Asaduzzaman called it "insulting and incorrect" that some overseas chip manufacturers believe the U.S. doesn't have the skills and cannot keep the cost low to compete with others in the industry. "We are the ones who invented the chip technology; now we are depending on overseas companies for chips," he said.


Complexity is the biggest threat to cloud success and security

Enterprises hit the “complexity wall” soon after deployment when they realize the cost and complexity of operating a complicated and widely distributed cloud solution outpaces its benefits. The number of moving parts quickly becomes too heterogeneous and thus too convoluted. It becomes obvious that organizations can’t keep the skills around to operate and maintain these platforms. Welcome to cloud complexity. Many in IT blame complexity on the new array of choices developers have when they build systems within multicloud deployments. However, enterprises need to empower innovative people to build better systems in order to build a better business. Innovation is just too compelling of an opportunity to give up. If you place limits on what technologies can be employed just to avoid operational complexity, odds are you’re not the best business you can be. Security becomes an issue as well. Security experts have long known that more vulnerabilities exist within a more complex technology solution (that is, one that is more physically and logically distributed and heterogeneous).


Limits to Blockchain Scalability vs. Holochain

It is worth noting that it would probably take years to get all of Twitter’s users to migrate over to Holochain to host themselves, even if Twitter switched their infrastructure to this kind of decentralized architecture. This is where the Holo hosting framework comes in. Holo enables Holochain apps, which would normally be self-hosted, to be served to a web user from a peer hosting network. In other words, if your users just expect to open their browser, type in a web address, and have access to your app, you may need to provide them with a hosted version. Holo has a currency which pays hosts for the hosting power they provide for Holochain apps that still need to serve mainstream web users. So instead of paying Amazon or Google to host your app, you pay a network of peer hosting providers in the HoloFuel cryptocurrency. Instead of gas fees costing over 1 billion times what it would cost to host on Amazon (as it does on Ethereum), we expect Holo hosting to have more competitive pricing to current cloud providers because of the low demand on system resources as outlined above.



Quote for the day:

“Doing the best at this moment puts you in the best place for the next moment.” -- Oprah Winfrey

Daily Tech Digest - June 11, 2021

Why API Quality Is Top Priority for Developers

Processes such as chaos engineering, load testing and manual quality assurance can uncover situations where an API is failing to handle unexpected situations. Deploying your API to a cloud provider with a compelling SLA instead of your own hardware and network shifts the burden of infrastructure resiliency to a service, freeing your time to build features for your customers. A comprehensive suite of automated tests isn’t always sufficient to provide a robust API. Edge cases, unexpected code branches and other unplanned behavior may be triggered by requests that were not considered when writing the test suite. Traditional automated tests should be complemented by fuzz testing to help uncover hidden execution paths. ... It is expected that most APIs are built on layers of open source libraries and frameworks. Software composition analysis is a necessity to stay on top of zero-day vulnerabilities by identifying vulnerable dependencies as soon as they are discovered. OWASP guidance is a must-have—directing API developers to implement attack mitigation strategies such as CORS and CSRF protection. Application logic must be well tested for authorization and authentication.
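
Property-based testing libraries make the fuzz-testing complement concrete. The sketch below uses Hypothesis against a hypothetical input parser (the parser itself is invented), asserting that arbitrary text yields either a value or a controlled error, never an unhandled crash from a hidden execution path:

```python
from hypothesis import given, strategies as st

def parse_quantity(raw: str) -> int:
    """Hypothetical API input parser under test."""
    value = int(raw.strip())
    if value < 0:
        raise ValueError("quantity must be non-negative")
    return value

@given(st.text())
def test_parser_never_crashes_unexpectedly(raw):
    # Fuzzing: arbitrary text must produce a value or a *controlled* error,
    # never an unhandled exception from an unplanned code branch.
    try:
        parse_quantity(raw)
    except ValueError:
        pass  # rejecting bad input is acceptable behaviour

test_parser_never_crashes_unexpectedly()
```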


New Ransomware Group Claiming Connection to REvil Gang Surfaces

Like many established ransomware operators, the gang behind Prometheus has adopted a very professional approach to dealing with its victims — including referring to them as "customers," PAN said. Members of the group communicate with victims via a customer service ticketing system that includes warnings on approaching payment deadlines and notifications of plans to sell stolen data via auction if the deadline is not met. "New ransomware gangs like Prometheus follow the same TTPs as big players [such as] Maze, Ryuk, and NetWalker because it is usually effective when applied the right way with the right victim," Santos says. "However, we do find it interesting that this group sells the data if no ransom is paid and are very vocal about it." From samples provided by the Prometheus ransomware gang on their leak site, the group appears to be selling stolen databases, emails, invoices, and documents that include personally identifiable information. "There are marketplaces where threat actors can sell leaked data for a profit, but we currently don't have any insight on how much this information could be sold in a marketplace," Santos says.


Google is using AI to design its next generation of AI chips more quickly than humans can

Google’s engineers trained a reinforcement learning algorithm on a dataset of 10,000 chip floor plans of varying quality, some of which had been randomly generated. Each design was tagged with a specific “reward” function based on its success across different metrics like the length of wire required and power usage. The algorithm then used this data to distinguish between good and bad floor plans and generate its own designs in turn. As we’ve seen when AI systems take on humans at board games, machines don’t necessarily think like humans and often arrive at unexpected solutions to familiar problems. When DeepMind’s AlphaGo played human champion Lee Sedol at Go, this dynamic led to the infamous “move 37” — a seemingly illogical piece placement by the AI that nevertheless led to victory. Nothing quite so dramatic happened with Google’s chip-designing algorithm, but its floor plans nevertheless look quite different to those created by a human. Instead of neat rows of components laid out on the die, sub-systems look like they’ve almost been scattered across the silicon at random.
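
The per-design "reward" can be pictured as a single scalar that blends the metrics mentioned. The weights and metric names below are invented for illustration, not Google's actual formula:

```python
# Sketch of a composite floorplan reward: metrics are normalized so that
# lower wirelength/power/congestion push the scalar reward higher. The
# weights and metric names are invented, not Google's published objective.
def floorplan_reward(wirelength, power, congestion,
                     w_wire=0.5, w_power=0.3, w_cong=0.2):
    cost = w_wire * wirelength + w_power * power + w_cong * congestion
    return -cost                      # RL agents maximize, so negate the cost

# The agent compares candidate placements by this single scalar:
good = floorplan_reward(wirelength=0.42, power=0.30, congestion=0.10)
bad = floorplan_reward(wirelength=0.90, power=0.75, congestion=0.60)
assert good > bad
```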


More and More Professionals Are Using Their Career Expertise to Launch Entrepreneurial Ventures

The first step is to immerse yourself within your training and specialty and have the confidence to be a key thought leader in the space. Do the extra research, spend the time to learn all of the new information and data in your field to truly understand the opportunity within. "I have been fortunate to be involved with several top academic institutions during my training. While the training was fantastic, there were areas that I felt could be improved for the ultimate outcome of increased access to high-quality healthcare," says Dr. Bajaj. "Thankfully, this vision has resulted in great outcomes and happy patients." ... "Ready. Fire. Aim!" as Dr. Bajaj puts it, "Time was not waiting for me to be fully prepared. Sometimes you have to take the leap." In entrepreneurship, there are no guarantees, which is quite different from some of the career paths that we have trained for our entire academic life. Guaranteed salary, retirement plans, and annual bonuses are far from promised in your own business, and it is important to adapt accordingly. Everything will not go according to plan, and it is important to find comfort with that. As long as the launchpad for growth has been established, patience is the biggest challenge, not security.


CISOs: It's time to get back to security basics

The goal of cybersecurity used to be protecting data and people's privacy, Summit said. There has been a major shift in that thinking. "It's one thing to lose a patient's data, which is extremely important to protect, but when you start interrupting" people's ability to travel or the food supply chain, "you have a whole different level of problems … It's not just about protecting data but your operations. That's where major changes are starting to occur." Summit added that he has long said that if companies had made cybersecurity a high priority long before now, "we wouldn't be in this position" and facing government scrutiny. The cybersecurity field is "incredibly dynamic," Hatter said, and CISOs don't have the luxury of planning out three to five years. "We want to create and deploy a strategy that's sound and solid. But market forces demand we recalibrate what we do, and COVID-19 was a great example of that." CISOs now have to have as resilient a strategy as possible but be prepared to make changes. Managed security service providers can help, Summit said, but CISOs are still feeling overwhelmed.


New quantum entanglement verification method cuts through the noise

Virtually any interaction with their environment can cause qubits to collapse like a house of cards and lose their quantum correlations – a process called decoherence. If this happens before an algorithm finishes running, the result is a mess, not an answer. (You would not get much work done on a laptop that had to restart every second.) In general, the more qubits a quantum computer has, the harder they are to keep quantum; even today’s most advanced quantum processors still have fewer than 100 physical qubits. The solution to imperfect physical qubits is quantum error correction (QEC). By entangling many qubits together in a so-called “genuine multipartite entangled” (GME) state, where every qubit is entangled with every other qubit in that bunch, it is possible to create a composite “logical” qubit. This logical qubit acts as an ideal qubit: the redundancy of the shared information means if one of the physical qubits decoheres, the information can be recovered from the rest of the logical qubit. Developing quantum error-correcting systems requires verifying that the GME states used in logical qubits are present and working as intended, ideally as quickly and efficiently as possible.
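
The redundancy argument is easiest to see in the simplest code, a three-bit repetition code, sketched classically below. Real QEC never reads qubits directly (that would destroy the state) and instead infers errors from stabilizer measurements, but the majority-vote recovery logic is the same:

```python
import random

# Classical sketch of the 3-bit repetition code: one logical bit is stored
# redundantly, and a single flipped copy is outvoted on decode. Real QEC
# infers errors via stabilizer measurements rather than direct readout.
def encode(bit):
    return [bit, bit, bit]

def noisy_channel(bits, p_flip=0.1):
    return [b ^ (random.random() < p_flip) for b in bits]

def decode(bits):
    return int(sum(bits) >= 2)            # majority vote recovers the bit

trials = 100_000
failures = sum(decode(noisy_channel(encode(1))) != 1 for _ in range(trials))
print(f"logical error rate: {failures / trials:.4f} vs physical 0.1")
# With p = 0.1 per copy, two or more flips occur with probability ~0.028,
# so the encoded bit already survives better than a single noisy bit.
```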


DeepMind says reinforcement learning is ‘enough’ to reach general AI

Some scientists believe that assembling multiple narrow AI modules will produce more intelligent systems. For example, you can have a software system that coordinates between separate computer vision, voice processing, NLP, and motor control modules to solve complicated problems that require a multitude of skills. A different approach to creating AI, proposed by the DeepMind researchers, is to recreate the simple yet effective rule that has given rise to natural intelligence. “[We] consider an alternative hypothesis: that the generic objective of maximising reward is enough to drive behaviour that exhibits most if not all abilities that are studied in natural and artificial intelligence,” the researchers write. This is basically how nature works. As far as science is concerned, there has been no top-down intelligent design in the complex organisms that we see around us. Billions of years of natural selection and random variation have filtered lifeforms for their fitness to survive and reproduce. Living beings that were better equipped to handle the challenges and situations in their environments managed to survive and reproduce. The rest were eliminated.
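
What "maximising reward is enough to drive behaviour" means is easiest to see in the smallest possible setting, a multi-armed bandit. The toy agent below is told nothing except a scalar reward, yet it learns to prefer the best action (payout probabilities are invented):

```python
import random

# Minimal reward-maximizing agent (epsilon-greedy bandit): nothing is
# programmed except "pick actions, observe reward, prefer what paid off",
# yet the agent converges on the best arm.
true_payouts = [0.2, 0.5, 0.8]            # hidden from the agent
estimates, counts = [0.0, 0.0, 0.0], [0, 0, 0]

for step in range(10_000):
    if random.random() < 0.1:             # explore occasionally
        arm = random.randrange(3)
    else:                                 # otherwise exploit the best estimate
        arm = estimates.index(max(estimates))
    reward = random.random() < true_payouts[arm]
    counts[arm] += 1
    estimates[arm] += (reward - estimates[arm]) / counts[arm]

print("learned preferences:", [round(e, 2) for e in estimates])  # ~[0.2, 0.5, 0.8]
```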


Evaluation of Cloud Native Message Queues

The significant rise in internet-connected devices will have a substantial influence on systems’ network traffic, and current point-to-point technologies using synchronous communication between endpoints in IoT systems are no longer a sustainable solution. Message queue architectures using the publish-subscribe paradigm are widely implemented in event-based systems. This paradigm uses asynchronous communication between entities and lends itself to scalable, high-throughput, low-latency systems that are well suited to the IoT domain. This thesis evaluates the adaptability of three popular message queue systems in Kubernetes. The systems are designed differently: the Kafka system uses a peer-to-peer architecture, while STAN and RabbitMQ use a master-slave architecture by applying the Raft consensus algorithm. A thorough analysis of the systems’ capabilities in terms of scalability, performance, and overhead is presented. The conducted tests give further insight into how the performance of the Kafka system is affected in multi-broker clusters with multiple partitions, which enable higher levels of parallelism.
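
The partition/parallelism relationship mentioned at the end works the same way in miniature: keyed messages map deterministically to partitions, per-partition order is preserved, and the partition count caps consumer parallelism. A self-contained sketch (Kafka's Java client actually hashes key bytes with murmur2; plain hash() stands in here):

```python
# Self-contained sketch of the partitioning idea behind Kafka's parallelism:
# messages with the same key land in the same partition, and each partition
# can feed a separate worker, bounding parallelism by the partition count.
NUM_PARTITIONS = 4
partitions = {p: [] for p in range(NUM_PARTITIONS)}

def publish(key: str, value: str):
    p = hash(key) % NUM_PARTITIONS        # Kafka hashes the key bytes (murmur2)
    partitions[p].append(value)           # per-partition order is preserved

for device in ["sensor-1", "sensor-2", "sensor-3", "sensor-1"]:
    publish(device, f"reading from {device}")

# One consumer per partition = up to NUM_PARTITIONS-way parallelism.
for p, msgs in partitions.items():
    print(f"partition {p}: {msgs}")
```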


Mysterious Custom Malware Collects Billions of Stolen Data Points

Researchers have uncovered a 1.2-terabyte database of stolen data, lifted from 3.2 million Windows-based computers over the course of two years by an unknown, custom malware. The heisted info includes 6.6 million files and 26 million credentials, and 2 billion web login cookies – with 400 million of the latter still valid at the time of the database’s discovery. According to researchers at NordLocker, the culprit is a stealthy, unnamed malware that spread via trojanized Adobe Photoshop versions, pirated games and Windows cracking tools, between 2018 and 2020. It’s unlikely that the operators needed any depth of skill to pull off their data-harvesting campaign, they added. “The truth is, anyone can get their hands on custom malware. It’s cheap, customizable, and can be found all over the web,” the firm said in a Wednesday posting. “Dark Web ads for these viruses uncover even more truth about this market. For instance, anyone can get their own custom malware and even lessons on how to use the stolen data for as little as $100. And custom does mean custom – advertisers promise that they can build a virus to attack virtually any app the buyer needs.”


Get your technology infrastructure ready for the Age of Uncertainty

As I say, it’s by no means clear what happens next and how ingrained changes will be. It’s plausible, of course, that we largely go back to old habits although that seems unlikely with a groundswell of employees having become accustomed to a different way of life and a different way of working. And it’s worth noting that even activities that mark a return to older ways of living, such as crafts and baking, are now very much digitally imbued. We download apps, consult websites and share ideas on forums when we try out a new recipe, and this sort of binary activity is part of the fabric of life because it is faster, more convenient and more scalable than the older alternatives. But what we need to do is strike the perfect balance between technology-enabled agility and what we want to do with our time. What we will need to manage through change is clear though. Adaptivity, enabled by robust, data-centric digital business designs, will become the watchword of operations. In other words, companies will need to be able to move fast, whatever happens, changing operating models, moving into adjacent markets and generally taking nothing for granted. In the new Age of Uncertainty, legacy systems have to be reassessed in the context of how best to build for agility.



Quote for the day:

"To have long term success as a coach or in any position of leadership, you have to be obsessed in some way." -- Pat Riley