Daily Tech Digest - September 02, 2021

Cyber Security In Cars

ISO/SAE 21434, Road vehicles – Cybersecurity engineering, addresses the cybersecurity perspective in engineering of electrical and electronic (E/E) systems within road vehicles. It will help manufacturers keep abreast of changing technologies and cyber-attack methods, and defines the vocabulary, objectives, requirements and guidelines related to cybersecurity engineering for a common understanding throughout the supply chain. The standard, developed in collaboration with SAE International, a global association of engineers and a key ISO partner, draws on the recommendations detailed in SAE J3061, Cybersecurity guidebook for cyber-physical vehicle systems, offering more comprehensive guidance and the input of experts all around the world. Dr Gido Scharfenberger-Fabian, Convenor of the group of ISO experts that developed the standard, said it will enable organizations to define cybersecurity policies and processes, manage cybersecurity risk and foster a cybersecurity culture. “ISO/SAE 21434 will help consider cybersecurity issues at every stage of the development process and in the field, increasing the vehicle’s own cybersecurity defences and mitigating the risk of potential vulnerabilities for every component,” he said.


Ultimate Guide to Becoming a DevOps Engineer

The job title DevOps Engineer is thrown around a lot and it means different things to different people. Some people claim that the title DevOps Engineer shouldn’t exist, because DevOps is ‘a culture’ or ‘a way of working’—not a role. The same people would argue that creating an additional silo defeats the purpose of overlapping responsibilities and having different teams working together. These arguments are not wrong. In fact, some companies that understand and do DevOps engineering very well don’t even have a role with that name (like Google!). The truth is that whenever you see DevOps Engineer jobs advertised, the ad might actually be for an infrastructure engineer, a site reliability engineer (SRE), a CI/CD engineer, a sysadmin, etc. So the definition of DevOps engineer is rather broad. One thing that’s certain, though, is that to be a DevOps engineer, you must have a solid understanding of DevOps culture and practices, and you should be able to bridge any communication gaps between teams in order to achieve software delivery velocity.


WhatsApp fined a record 225 mln euro by Ireland over privacy

A WhatsApp spokesperson said in a statement the issues in question related to policies in place in 2018 and the company had provided comprehensive information. "We disagree with the decision today regarding the transparency we provided to people in 2018 and the penalties are entirely disproportionate," the spokesperson said. EU privacy watchdog the European Data Protection Board said it had given several pointers to the Irish agency in July to address criticism from its peers for taking too long to decide in cases involving tech giants and for not fining them enough for any breaches. It said a WhatsApp fine should take into account Facebook's turnover and that the company should be given three months instead of six months to comply. Europe's landmark privacy rules, known as GDPR, are finally showing some teeth even if the lead regulator for some tech giants appears otherwise, said Ulrich Kelber, Germany's federal commissioner for data protection and freedom of information. "What is important now is that the many other open cases on WhatsApp in Ireland are finally decided on so that we can take faster and longer strides towards the uniform enforcement of data protection law in Europe," he told Reuters.


DevOps, Low-Code and RPA: Pros and Cons

RPA programs enable companies to automate repetitive tasks by creating software scripts using a recorder. For those of us who remember using the macro recorder in Microsoft Excel, it’s a similar concept. Once the script is created, users can then use a visual editor to modify, reorder and edit its steps. Speaking to the growing popularity of these solutions was the UiPath IPO on April 21, 2021, which ended up being one of the largest software IPOs in history. The use cases for RPA programs are unlimited—any repetitive task done via a UI is a candidate. RPA is an area where we’ve seen an intersection of business-user-designed apps (UiPath and Blue Prism) with more traditional DevOps tools, specifically in the test automation space (Tricentis, Worksoft, and Eggplant), and new conversational-based solutions like Krista. In the case of test automation, a lightweight recorder is given to a business user who can then record a business process. The recording is then fed to the automation team, which creates a hardened test case that in turn is fed into a CI/CD system.


IBM quantum computing: From healthcare to automotive to energy, real use cases are in play

Quantum computers are better at that than classical computers, Utz said. Anthem is running different models on IBM's quantum cloud. Right now, company officials are building a roadmap around how Anthem wants to deliver its platform using quantum technology, so "I can't say quantum is ready for primetime yet," Utz said. "The plan is to get there over the next year or so and have something working in production." A good place to start with anomaly detection is in finding fraud, he said. "Classical computers will tap out at some point and can't get to the same place as quantum computers." Other use cases are around longitudinal population health modeling, meaning that as Anthem looks at providing more of a digital platform for health, one of the challenges is that there is "almost an infinite number of relationships," he said. This includes different health conditions, the providers patients see, outcomes, and figuring out where there are outliers, he said. "There's only so much a classical system can do there, so we're looking for more opportunities to improve healthcare for our members and the population at large," and the ability to proactively predict risk, Utz said.


How to Implement Domain-Driven Design (DDD) in Golang

Domain-Driven Design is a way of structuring and modeling software after the domain it belongs to. What this means is that the domain first has to be considered for the software being written. The domain is the topic or problem that the software intends to work on, and the software should be written to reflect it. DDD advocates that the engineering team meet with the Subject Matter Experts (SMEs), the people who know the domain best. The reason for this is that the SMEs hold the knowledge about the domain, and that knowledge should be reflected in the software. It makes a lot of sense when you think about it: if I were to build a stock trading platform, do I as an engineer know the domain well enough to build a good one? The platform would probably be a lot better off if I had a few sessions with Warren Buffett about the domain first. The architecture of the code should also reflect the domain.
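To make this concrete, here is a minimal sketch of what domain-first Go code might look like for the stock trading example. Everything here (the trading package, the Order entity, its validation rules) is a hypothetical illustration under the article's premise, not code from any real platform; the point is simply that the types and their invariants echo the vocabulary an SME would use.

```go
// Package trading models a hypothetical stock trading domain.
// Types and rules here are illustrative only.
package trading

import (
	"errors"
	"time"
)

// Order is a domain entity; its fields come straight from the
// domain's vocabulary rather than from database or UI concerns.
type Order struct {
	Ticker   string
	Quantity int
	Placed   time.Time
}

// NewOrder enforces rules a subject matter expert would state plainly:
// every order names a ticker and trades at least one share.
func NewOrder(ticker string, quantity int) (Order, error) {
	if ticker == "" {
		return Order{}, errors.New("order must reference a ticker")
	}
	if quantity <= 0 {
		return Order{}, errors.New("quantity must be positive")
	}
	return Order{Ticker: ticker, Quantity: quantity, Placed: time.Now()}, nil
}
```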


China’s Personal Information Protection Law and Its Global Impact

The law’s restrictions on cross-border data transfers may not affect retailers that operate domestically, and hence have no need to transfer information abroad. However, the story is vastly different for two types of companies: those in possession of large amounts of personal information and those in possession of information on critical infrastructure. Moreover, PIPL declares that the authority of domestic regulators supersedes that of international treaties. PIPL will help foreign companies operating in China without cross-border data transfers to develop privacy policies in compliance with the law. Before PIPL, the lack of a domestic PI protection law led to the broad adoption of the EU’s GDPR as a privacy policy among foreign companies. However, the GDPR’s decision-making is based on agreements among EU member states, which does not apply in the case of China. Since PIPL will come into effect in November 2021, foreign firms in China will need to revise their privacy policies to fit the requirements of the new law.


10 Characteristics of an AI-Powered Enterprise

Digital transformation makes the inclusion of AI as part of the business strategy even more important than it would be otherwise because digital organizations are software companies. Since commercial applications and tools are increasingly taking advantage of AI, the logical development by extension is AI embedded in enterprise-built applications. After all, businesses are moving more data and compute to the cloud and their new applications are being designed as cloud-first applications. Of course, AI and machine learning tooling is also available in the cloud, so developers have what they need to build “intelligent” applications. AI and machine learning don't just work, however. They require testing and monitoring. “Losing trust in AI-infused applications is a high risk for AI-based innovation,” said Diego Lo Giudice, VP and principal analyst at Forrester, in a blog post. “Forrester Analytics data shows that 73% of enterprises claim to be adopting AI for building new solutions in 2021, up from 68% in 2020, and testing those AI-infused applications becomes even more critical.” Trust and safety are things that need to be proven through testing.


Why Rust is the best language for IoT development

Internet of Things (IoT) technology is rapidly terraforming the landscape of modern society right in front of our very eyes, and propelling us all into the future. It does this by providing solutions to everything from tracking your daily personal fitness goals with an Apple Watch, to completely revolutionising the entire transport sector. These devices connect to each other and form the great network required for something like a digital twin; they are constantly collating data in real time from the surrounding environment, which means that the system is always using entirely current information. As amazing and powerful as this technology is, it is slightly held back by the fact that, by their very nature, IoT devices have far less processing power than your average piece of equipment. This requires much more efficient code to be written to take full advantage of their raw potential without affecting the devices’ performance. This is where Rust comes into the picture as one of the very few languages that can provide a faster runtime for IoT technology.


Are Tesla’s Dojo supercomputer claims valid?

The D1, according to Tesla, features 362 teraFLOPS of processing power. This means it can perform 362 trillion floating-point operations per second (FLOPS), Tesla says. Now imagine harnessing the processing power of 25 D1 chips into a training tile, and then linking together 120 training tiles through multiple servers. That’s what Tesla is doing with the Dojo supercomputer for its autonomous cars. And with each training tile containing 9 PFLOPS of computing power, Dojo has (by my possibly inaccurate calculations) 1.08 exaFLOPS of power under its hood (Tesla calls it 1.1 EFLOPS). That kind of horsepower would make Dojo more than twice as fast as the currently acknowledged fastest supercomputer in the world, Fugaku. Built by Fujitsu, this supercomputer reaches speeds of 442 PFLOPS. Supercomputers already are being used to accelerate medical research and drug development because they are capable of quickly processing massive amounts of data. Indeed, researchers have relied on supercomputers to power COVID-19 research since the pandemic began in early 2020.
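The arithmetic is easy to check from the single per-chip figure. Here is a quick sketch using only the numbers quoted above (362 teraFLOPS per D1, 25 chips per tile, 120 tiles); the program and its constants are just a worked restatement of the article's own math.

```go
// Back-of-the-envelope check of the Dojo numbers quoted above.
package main

import "fmt"

func main() {
	const (
		d1TeraFLOPS  = 362.0 // per D1 chip, per Tesla
		chipsPerTile = 25
		tiles        = 120
	)
	tilePFLOPS := d1TeraFLOPS * chipsPerTile / 1000 // 9.05 PFLOPS per training tile
	dojoEFLOPS := tilePFLOPS * tiles / 1000         // ~1.086 EFLOPS in total
	fmt.Printf("per tile: %.2f PFLOPS, total: %.3f EFLOPS\n", tilePFLOPS, dojoEFLOPS)
}
```

This reproduces the 1.08 EFLOPS figure above (1.086, which Tesla rounds to 1.1).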



Quote for the day:

"Great leaders go forward without stopping, remain firm without tiring and remain enthusiastic while growing." -- Reed Markham

Daily Tech Digest - September 01, 2021

Top 3 API Vulnerabilities: Why Apps are Pwned by Cyberattackers

2021 is already the year of the API security incident, and the year is not over. API flaws impact the entire business – not just dev, security or the business groups. Finger-pointing has never fixed the problem. The fix begins with collaboration; development needs a full understanding from business groups on how the API should function. API coding is different, so a refresh on secure coding practices is warranted. And security needs to be involved upfront, to help uncover gaps before publication. A great place to start is with OWASP. It has published the API Security Top 10 and, more recently, the Completely Ridiculous API, which includes examples of bad APIs in an application. Organizations can use the Completely Ridiculous API online or in-house as an educational platform to train development and security on the errors to avoid when utilizing APIs. Whether you are utilizing an “API-first approach” or just starting your journey into digital transformation aided by APIs, knowing the vulnerabilities that are out there, and what might happen if something is missed, is crucial.


How Tech Leaders Can Leverage Their Mentoring and Teaching with Coaching

Putting the focus on the other person means that we are encouraging them to do all of the work of coming up with a solution. We refrain from asking information gathering questions and instead ask questions that will help them solve the problem on their own. After all, anything that they have an answer to ... they already know! We want to help them make new connections in order to come up with new ideas that they didn’t have when they started talking to us. We also refrain from sharing our thoughts and opinions until they ask us for them directly or it is clear that they could benefit from some information that we have that they don’t. To aid in this, consider saying something early on in your conversation like, "I’m going to put my coaching hat on. I’m happy to share my expertise with you, but prefer to explore a bit first. If we get to the point where you really want to know my thoughts or I think of something that may be helpful to share, I can switch to my ‘expert’ hat."


All About Waymo’s AI-Powered Urban Driver

Waymo’s driving software is based on years of AI research, the Waymo Open Dataset initiative, and the Google Brain research team. The engineers working at Waymo operate in coordination with the Google Brain team to apply deep nets to the car’s pedestrian detection system. The team has created a robust, generalisable tech stack based on its operation in multiple environments and cities. The Waymo Driver has learnt to behave assertively and merge into traffic based on this experience. Waymo has invested in creating training software for the Waymo Driver. Simulation City is software for testing the autonomous vehicles and assessing their performance in the cities where Waymo is present. It creates realistic conditions like spring showers, solar glare, or dimming light for the technology to experience; the researchers further learn from the system’s reactions. ... The Waymo Driver itself is trained with a highly nuanced understanding of city roads, with driving experience of more than 20 million miles on public roads and 20 million miles in simulation. It can adapt to local driving conditions accurately, given this training.


Security engineer job requirements, certifications, and salary

IT has traditionally been a field that values skills over paper credentials—we all know the stories of tech pioneers who dropped out of high school—but that's changed over the years as the industry has become more professionalized. That said, most hiring managers do value experience and demonstrated skills, and if you can put together that sort of resume, that can help make up for a non-technical undergraduate degree. At any rate, nobody would make an immediate leap from college to a security engineer gig; you would need to pass through an introductory phase of your career first, possibly as a security analyst. One way to signal to your employer or potential future employers that you're ready to advance to a security engineer job is by pursuing some relevant formal certifications. ... One thing to keep in mind is that, while this is a tech job, it's not a job that's limited to the tech industry: just about every company that's larger than a handful of people, in every sector, needs security engineers. Government agencies and financial institutions in particular have a great need for security engineers, but you could also find yourself working in manufacturing or retail as well.


Why should I choose Quarkus over Spring for my microservices?

Quarkus can automatically detect changes made to Java and other resource and configuration files, then transparently re-compile and re-deploy the changes. Usually, within a second, you can view your application’s output or compiler error messages. This feature can also be used with Quarkus applications running in a remote environment. The remote capability is useful where rapid development or prototyping is needed but provisioning services in a local environment isn’t feasible or possible. Quarkus takes this concept a step further with its continuous testing feature to facilitate test-driven development. As changes are made to the application source code, Quarkus can automatically rerun affected tests in the background, giving developers instant feedback about the code they are writing or modifying. ... From the beginning, Quarkus was designed around Kubernetes-native philosophies, optimizing for low memory usage and fast startup times. As much processing as possible is done at build time. Classes used only at application startup are invoked at build time and not loaded into the runtime JVM, reducing the size, and ultimately the memory footprint, of the application running on the JVM.


Sustainable transformation of agriculture with the Internet of Things

With the urgency to prevent environmental degradation, reduce waste and increase profitability, farmers around the globe are increasingly opting for more efficient crop management solutions supported by optimization and control technologies derived from the Industrial Internet of Things (IIoT). Intelligent information and communication technologies (IICT) – machine learning (ML), AI, IoT, cloud-based analytics, actuators, and sensors – are being implemented to achieve higher control of spatial and temporal variabilities with the aid of satellite remote sensing. The use and application of this set of related technologies is known as “Smart Agriculture” (SA). In SA, real-time and continuous monitoring of weather, crop growth, plant physical/chemical variables, and other critical environmental factors allows the optimization of yield production, reduction of labor, and improvement of farming products. Practices such as irrigation management, resource management, production, or fertilization operations are being facilitated by integrating IoT systems capable of providing information about multiple crop factors.


Mainframes, ML and digital transformation

Moving from mainframes to client-server didn't just mean you went from renting one kind of box to buying another - it changed the whole way that computing worked. In particular, software became a separate business, and there were all sorts of new companies selling you new kinds of software, some of which solved existing problems but some of which changed how a company could operate. SAP made just-in-time supply chains a lot easier, and that enabled Zara, and Tim Cook’s Apple. New categories of software enabled new ways of doing business. The same shift is happening now, as companies move to the cloud - you go from owning boxes to renting them (perhaps), but more importantly you change what kinds of software you can use. If buying software means a URL, a login and a corporate credit card instead of getting onto the global IT department’s datacenter deployment schedule for sometime in the next three years, then you can have a lot more software from a lot more companies.


What’s next for data privacy in the UK?

Since the implementation of GDPR, there has been a surge in recruitment for roles like ‘head of data governance and privacy’. It’s time to seize this momentum and move to the next milestone – let’s call it GDPR+. GDPR+ needs to answer the question of how we protect and use data within the country and cross-border. Ideally, we need a Data Privacy Act and a cross-party overseer of the whole process whose remit spans all government departments – a kind of ‘data privacy czar’. Ideally this would be an individual with a strong background in data. The question that needs to be answered is how do we ensure businesses align their practices with any new regulation and handle data responsibly rather than selling it for their own gain? Data fiduciaries could be part of the solution; third-party organisations who are given the legal right to handle private data. But it needs to be a non-political government-funded third party. It’s most likely that the government would outsource any enforcement, but it’s pertinent to ask whether a private company would have the best interests of individual citizens.


Why you want what you want

Great marketers are certainly masters of mimetic manipulation. Burgis points to Edward Bernays, the public relations pioneer, as a prime example. In 1929, when the American Tobacco Company realized that breaking the taboo against women smoking in public could generate beaucoup revenue, it hired Bernays’s firm. He convinced 30 New York City debutantes to join the Easter parade and light up Lucky Strikes—and arranged to have them photographed. The next day, the photos of the debs smoking their “torches of freedom” appeared in newspapers across the country. Sales of Lucky Strikes tripled by the following Easter. ... Much of Wanting is devoted to translating and illustrating Girard’s theories in a consumable way, and Burgis does a fine job at that task. The book’s most salient point, even if it is somewhat opaque, is that leaders choose to pursue what Burgis calls transcendent desire: “Magnanimous, great-spirited leaders are driven by transcendent desire—desire that leads outward, beyond the existing paradigm, because the models are external mediators of desire. These leaders expand everyone’s universe of desire and help them explore it.”


Getting ahead of a major blind spot for CISOs: Third-party risk

As the industry has seen firsthand, even mature and well-established enterprise security teams lack visibility into the network hygiene of their branches, offices and contractors abroad, due to varying security policies and protocols, management hierarchy and known pain points in franchise-based businesses. The same is applicable to their supply chain, where the level of network hygiene is typically a “black box” or something the third party is simply not willing to discuss. Acquisition of quantitative, historical and recent indicators of compromise is a vital component of TPRM, providing enterprise organizations with actionable information to determine whether a counterpart may be compromised with malware and what service may potentially be breached by it. This knowledge enables CISOs to make strategic and tactical decisions, as well as to communicate with other teams, including those responsible for vendor management and supply chain and the organization’s legal team.



Quote for the day:

"Leadership is an ever-evolving position." -- Mike Krzyzewski

Daily Tech Digest - August 31, 2021

LockFile Ransomware Uses Never-Before Seen Encryption to Avoid Detection

The ransomware first exploits unpatched ProxyShell flaws and then uses what’s called a PetitPotam NTLM relay attack to seize control of a victim’s domain, researchers explained. In this type of attack, a threat actor uses Microsoft’s Encrypting File System Remote Protocol (MS-EFSRPC) to connect to a server, hijack the authentication session, and manipulate the results such that the server then believes the attacker has a legitimate right to access it, Sophos researchers described in an earlier report. LockFile also shares some attributes of previous ransomware, as well as other tactics, such as forgoing the need to connect to a command-and-control center, to hide its nefarious activities, researchers found. “Like WastedLocker and Maze ransomware, LockFile ransomware uses memory mapped input/output (I/O) to encrypt a file,” Loman wrote in the report. “This technique allows the ransomware to transparently encrypt cached documents in memory and causes the operating system to write the encrypted documents, with minimal disk I/O that detection technologies would spot.”


How To Prepare for SOC 2 Compliance: SOC 2 Types and Requirements

In today’s data-driven world, SOC 2 compliance is essential for all cloud-based businesses and technology services that collect and store their clients’ information and want to be seen as reliable. This gold standard of information security certifications helps validate your data privacy and security infrastructure to help prevent data breaches. Data breaches are all too common nowadays among small to large-scale companies across the globe in all sectors. According to PurpleSec, half of all data breaches will occur in the United States by 2023. Experiencing such a breach causes customers to completely lose trust in the targeted company, and those who have been through one tend to move their business elsewhere to protect their personal information in the future. SOC 2 compliance can protect from all this pain by improving customer trust in a company with secured data privacy policies. Companies that adhere to the gold-standard principles of SOC 2 compliance can provide this audit as evidence of secure data privacy practices.


6 Reasons why you can’t have DevOps without Test Automation

Digital transformation is gaining traction every single day. The modern consumer is more demanding of quality products and services. Adoption of technologies helps companies stay ahead of the competition. They can achieve higher efficiency and better decision-making. Further, there is room for innovation that aims to meet the needs of customers. All of this implies integration, continuous development, innovation, and deployment, and all of it is achievable with DevOps and the attendant test automation. But can one exist without the other? We believe not; test automation is a critical component of DevOps, and we will tell you why. ... One of the biggest challenges with software is the need for constant updates. That is the only way to avoid glitches while improving upon what exists. But the process of testing across many operating platforms and devices is difficult. DevOps processes must execute testing, development, and deployment in the right way. Improper testing can lead to low-quality products. Customers have so many options in the competitive business landscape.


One Year Later, a Look Back at Zerologon

Netlogon is a protocol that serves as a channel between domain controllers and machines joined to the domain, and it handles authenticating users and other services to the domain. CVE-2020-1472 stems from a flaw in the cryptographic authentication scheme used by the Netlogon Remote Protocol. An attacker who sends Netlogon messages in which various fields are filled with zeroes can change the computer password of the domain controller that is stored in Active Directory, Tervoort explains in his white paper. This can be used to obtain domain admin credentials and then restore the original password for the domain controller, he adds. "This attack has a huge impact: it basically allows any attacker on the local network (such as a malicious insider or someone who simply plugged in a device to an on-premises network port) to completely compromise the Windows domain," Tervoort wrote. "The attack is completely unauthenticated: the attacker does not need any user credentials." Another reason Zerologon appeals to attackers is that it can be plugged into a variety of attack chains.


Forrester: Why APIs need zero-trust security

API governance needs zero trust to scale. Getting governance right sets the foundation for balancing business leaders’ needs for a continual stream of innovative new API and endpoint features with the need for compliance. Forrester’s report says “API design too easily centers on innovation and business benefits, overrunning critical considerations for security, privacy, and compliance such as default settings that make all transactions accessible.” The report says policies must ensure the right API-level trust is enabled for attack protection. That isn’t easy to do with a perimeter-based security framework. Primary goals need to be setting a security context for each API type and ensuring that zero-trust methods for securing channels can scale. APIs need to be managed by least-privileged access and microsegmentation in every phase of the SDLC and the continuous integration/continuous delivery (CI/CD) process. The well-documented SolarWinds attack is a stark reminder of how source code can be hacked and legitimate program executable files can be modified undetected, then invoked months after being installed on customer sites.


The consumerization of the Cybercrime-as-a-Service market

Many trends in the cybercrime market and shadow economy mirror those in the legitimate world, and this is also the case with how cybercriminals are profiling and targeting victims. The Colonial Pipeline breach triggered a serious reaction from the US government, including some stark warnings to criminal cyber operators, CCaaS vendors and any countries hosting them, that a ransomware attack may lead to a kinetic response or even inadvertently trigger a war. Not long after, the criminal gang suspected to be behind the attack resurfaced under a new name, BlackMatter, and advertised that they are buying access from brokers with very specific criteria. Seeking companies with revenue of at least 100 million US dollars per year and 500 to 15,000 hosts, the gang offered $100,000, but also provided a clear list of targets they wanted to avoid, including critical infrastructure and hospitals. It’s a net positive if the criminals actively avoid disrupting critical infrastructure and important targets such as hospitals.


NGINX Commits to Open Source and Kubernetes Ingress

Regarding NGINX’s open source software moving forward, Whiteley said the company’s executives have committed to a model where open source will be meant for use in production and nothing less. Whiteley even said that, if they were able to go back in time, certain features currently available only in NGINX Plus would be available in the open source version. “One model is ‘open source’ equals ‘test/dev, ‘ ‘commercial’ equals ‘production,’ so the second you trip over into production, you kind of trip over a right-to-use issue, where you then have to start licensing the technology ...” said Whiteley. “What we want to do is focus on, as the application scales — it’s serving more traffic, it’s generating more revenue, whatever its goal is as an app — that the investment is done in lockstep with the success and growth of that.” This first point, NGINX’s stated commitment to open source, serves partly as background for the last point mentioned above, wherein NGINX says it will devote additional resources to the Kubernetes community, a move partly driven by the fact that Alejandro de Brito Fonte, the founder of the ingress-nginx project, has decided to step aside.


How RPA Is Changing the Way People Work

Employees are struggling under the burden of routine, repetitive work while consumers demand ever-better services and products. Employees expect companies to improve the working environment in the same spirit as improving customer satisfaction. The corporate response, in the form of automation, is expanding the comfort zone of employees. But there’s a flip side to the RPA coin. With the rise of automation, people fear the consequences of RPA solutions replacing human labor and marginalizing the human touch that was at the core of services and product delivery. Such a threat may seem an exaggeration. RPA removes the drudgery of routine work and sets the stage for workers to play a more decisive role in areas where human touch, care, and creativity are essential. ... With loads of time and better tools at their disposal, employees are more caring and sensitive to the need for making a difference in the lives of customers. More employees are actively unlocking their reservoir of creativity.


Predicting the future of the cloud and open source

The open source community has also started to dedicate time and effort to resolving some of the world’s most life-threatening challenges. When the COVID-19 pandemic hit, the open source community quickly distributed data to create apps and dashboards that could follow the evolution of the virus. Tech leaders like Apple and Google came together to build upon this technology to provide an open API that could facilitate the development of standards and applications by health organisations around the world, and open hardware designs for ventilators and other critical medical equipment that was in high demand. During lockdown last year, the open source community also launched projects to tackle the climate crisis, an increasingly important issue that world leaders are under ever-more pressure to address. One of the most notable developments was the launch of the Linux Foundation Climate Finance Foundation, which aims to provide more funding for game-changing solutions through open source applications.


Pitfalls and Patterns in Microservice Dependency Management

Running a product in a microservice architecture provides a series of benefits. Overall, the possibility of deploying loosely coupled binaries in different locations allows product owners to choose among cost-effective and high-availability deployment scenarios, hosting each service in the cloud or on their own machines. It also allows for independent vertical or horizontal scaling: increasing the hardware resources for each component, or replicating the components, which has the benefit of allowing the use of different independent regions. ... Despite all its advantages, having an architecture based on microservices may also make some processes harder to deal with. In the following sections, I'll present the scenarios I mentioned before (although I changed some of the real names involved). I will present each scenario in detail, including some memorable pains related to managing microservices, such as aligning traffic and resource growth between frontends and backends. I will also talk about designing failure domains, and computing product SLOs based on the combined SLOs of all microservices.



Quote for the day:

"Different times need different types of leadership." -- Park Geun-hye

Daily Tech Digest - August 30, 2021

4 signs your company has an innovation-minded culture

It’s important that your organization communicates its values clearly and executes tactics consistently. If your organization values a culture of innovation, communicate that importance while putting your plan into action. This commitment might require you to divert some resources from production at times, but it’s an incredibly worthwhile investment. A workforce that feels valued will help you enjoy the impacts of innovation down the road. Career paths don’t happen in a straight line—by empowering people with tools, training, and resources, they’ll excel in their unique development journey and support a culture of innovation. To invest in our people, we created Zotec University, a learning development platform offering hundreds of custom learning journeys to help participants hone their skills. We also offer a performance development platform that places team members in control of their own career experiences. Creating a culture of innovation takes careful planning, purposeful decision-making, intentionality, and consistent communication. 


What is a chief technology officer? The exec who sets tech strategy

“As companies push to effectively drive technology transformation, we believe there is a very strong push to find technology leaders [who] bring experience and capabilities from hands-on leadership and stewardship of such activities,” Stephenson says. The CTO role naturally requires a strong knowledge of various technologies, and “real technology acumen, especially in the architecture, software, and technology strategy areas to address legacy technology challenges,” Stephenson says. Knowing how technology works is crucial, but it’s also important to be able to explain the business value of a particular technology to C-level colleagues who might not be technically inclined. It’s also vital to be able to see how technology fits with strategic business goals. “Technology vision coupled with strategic thinking beyond technology” is important, says Ozgur Aksakai, president of the Global CTO Forum, an independent, global organization for technology professionals. “There are a lot of technology trends that do not live up to their promises,” Aksakai says. 


Reasons to Opt for a Multicloud Strategy

It is like giving all the critical keys to one person. Do you know the dependency and expectations this creates? Huge. What if you carefully select the best of the best services from different cloud providers? It looks like a feasible solution, and this is how a multicloud strategy works. A multicloud strategy empowers and upgrades a company’s IT systems, performance, cloud deployment, cloud cost optimization and more. The multicloud approach presents a lot of options for the enterprise. For example, some services are more cost-effective at scale from one provider than from others. Multicloud avoids vendor lock-in by not depending on only one cloud provider, instead helping companies select best-of-breed cloud services from different providers for application workloads. The multicloud pattern provides system redundancy that reduces the hazards of downtime. The multicloud strategy will also help companies raise their security bar by selecting best-of-breed DevSecOps solutions. An organization that implements a multicloud strategy can improve security, strengthen disaster recovery capabilities and increase uptime.


How Blockchain Startups Transform Banking and Payments Industry

Payments industry today has been deeply impacted by the rise of blockchain technology and cryptocurrencies. The legacy system is built upon the inheritance of technologies dating to the advent of credit cards and interbank settlement developed in the mid-1900s for use in centralized, established financial institutions with both institutional as well as retail clients in the era when the post-war fiat money system was the only option for private financial representation. Upon the advent of blockchain technology and cryptocurrencies, it gradually became increasingly clear that the legacy system, while revolutionary in its early days, still is quite inefficient and is designed from the perspective of an institutional client. This leads to relatively limited access to financial services by the majority of the retail market segment. Especially retail clients in developing nations have been hit particularly hard with higher fees, longer processing times for transactions, more invasive and ineffective KYC/AML processes and limited access to technology and thus limited access to all types of financial services.


Cerebras Upgrades Trillion-Transistor Chip to Train ‘Brain-Scale’ AI

A major challenge for large neural networks is shuttling around all the data involved in their calculations. Most chips have a limited amount of memory on-chip, and every time data has to be shuffled in and out it creates a bottleneck, which limits the practical size of networks. The WSE-2 already has an enormous 40 gigabytes of on-chip memory, which means it can hold even the largest of today’s networks. But the company has also built an external unit called MemoryX that provides up to 2.4 petabytes of high-performance memory, which is so tightly integrated it behaves as if it were on-chip. Cerebras has also revamped its approach to the data it shuffles around. Previously the guts of the neural network would be stored on the chip, and only the training data would be fed in. Now, though, the weights of the connections between the network’s neurons are kept in the MemoryX unit and streamed in during training. By combining these two innovations, the company says, they can train networks two orders of magnitude larger than anything that exists today.


No-Code Automated Testing: Best Practices and Tools

No-code automated tests are usually at a system or application level, which makes creating a test suite more daunting. It is important not to become fixated on getting 100% test coverage from the get-go. 100% coverage is a great goal, but it can seem so far away when starting out. Instead, we should focus on getting a handful of test cases created and really understanding how the tools we select work. Becoming an expert in our tools is much more beneficial than creating dozens of tests in an unfamiliar tool. It can be tempting to focus on every use case all at once, but it is important to prioritize which use cases to target first. The reality of development and testing is that we may not be able to test every single use case. ... It can be tempting to exercise every nook and cranny of an application, but it is important to start with only the actions the user will take. For example, when testing a login form, it is important to test the fields visible to the user and the login button, since that is what the user will interact with in most cases. Testing the edge cases is important, but we should always start with the happy path before moving on to edge cases.


9 Automated Testing Practices To Avoid Tutorial (Escape Pitfalls)

Most people spend way more time reading source code than writing it, so making your code as easy to read as possible is an excellent decision. It'll never read like Hemingway, but that doesn't mean it should be readable only to you. Yoni Goldberg considers this the Golden Rule for testing: one must instantly understand the test's intent. You will love yourself (and your team members will pat you on the back) for making your tests readable. When you read those same tests a year down the road, you won't be thinking, “What was I doing?” or “What was this test even for?” If you don't understand what a test is for, you obviously can't use it. And if you can't use a test, what value does it have to you or your team? ... If your new test relies on a successful previous test, you're asking for trouble. If the previous test failed or corrupted the data, any subsequent tests will likely fail or provide incorrect results. Isolating your tests will give you more consistent results, and accurate and consistent results will make your tests worthwhile.
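As a sketch of what that isolation can look like in code (Go here, with an entirely hypothetical Cart type), each test builds its own fresh fixture instead of leaning on state left behind by an earlier test, so the tests pass in any order. Saved as a _test.go file, this runs with go test.

```go
package cart

import "testing"

// Cart is a hypothetical stand-in type used only for this illustration.
type Cart struct{ items map[string]int }

func NewCart() *Cart              { return &Cart{items: map[string]int{}} }
func (c *Cart) Add(sku string, n int) { c.items[sku] += n }
func (c *Cart) Count() (total int) {
	for _, n := range c.items {
		total += n
	}
	return total
}

// Each test constructs its own Cart, so no test depends on data left
// behind by another and execution order is irrelevant.
func TestAddItem(t *testing.T) {
	c := NewCart()
	c.Add("sku-1", 2)
	if got := c.Count(); got != 2 {
		t.Fatalf("Count() = %d, want 2", got)
	}
}

func TestEmptyCart(t *testing.T) {
	c := NewCart()
	if got := c.Count(); got != 0 {
		t.Fatalf("Count() = %d, want 0", got)
	}
}
```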


Facilitate collaborative breakthrough with these moves

Vertical facilitation is common and seductive because it offers straightforward and familiar answers to these five questions. In this approach, both the participants and the facilitator typically give confident, superior, controlling answers to the five questions (i.e., they identify one way to reach their goals). In horizontal facilitation, by contrast, participants typically give defiant, defensive, autonomous answers, and the facilitator supports this autonomy. The vertical and horizontal approaches answer the five collaboration questions in opposite ways. In transformative facilitation, the facilitator helps the participants alternate between the two approaches. ... Often, when collaborating, each of the participants and the facilitator starts off with a confident vertical perspective: “I have the right answer.” Each person thinks, “If only the others would agree with me, then the group would be able to move forward together more quickly and easily.” But when members of the group take this position too far or hold it for too long and start to get stuck in rigid certainty, the facilitator needs to help them explore other points of view, a collaboration move I call inquiring. 


Private 5G: Tips on how to implement it, from enterprises that already have

The first rule is your private 5G is a user of your IP network, not an extension of it. Every location you expect to host private 5G cells and every site you expect will have some 5G features hosted will need to be on your corporate VPN, supported by the switches and routers you'd typically use. Since all three private-5G enterprises were using their 5G networks largely for IoT that was focused on some large facilities, that didn’t present a problem for them. It seems likely that most future private 5G adoption will fit the same model, so this rule should be easy to follow overall. The second rule is that 5G control-plane functions will be hosted on servers. 5G RAN and O-RAN control-plane elements should be hosted close to your 5G cells, and 5G core features at points where it's convenient to concentrate private 5G traffic. Try to use the same kind of server technology, the same middleware, and the same software source for all of this, and be sure you get high-availability features. Rule three is that 5G user-plane functions associated with the RAN should be hosted on servers, located with the 5G RAN control-plane features. 


5 DevSecOps open source projects to know

Properly securing a software supply chain involves more than simply doing a point-in-time scan as part of a DevSecOps CI/CD pipeline. With the help of a working partnership that includes Google, the Linux Foundation, Red Hat, and Purdue University, sigstore brings together a set of tools that developers, software maintainers, package managers, and security experts can benefit from. It handles the digital signing, verification, and logging of data for transparent auditing, making it safer to distribute and use any signed software. The goal is to provide a free and transparent chain-of-custody tracing service for everyone. The sigstore service will run as a non-profit, public-good service to provide software signing. Cosign, which released its 1.0 version in July 2021, signs and verifies artifacts stored in Open Container Initiative (OCI) registries. It also includes underlying specifications for storing and discovering signatures. Fulcio is a root certificate authority (CA) for code-signing certificates. It issues certificates based on an OpenID Connect (OIDC) email address. The certificates that Fulcio issues to clients in order for them to sign an artifact are short-lived.



Quote for the day:

"Leadership, on the other hand, is about creating change you believe in." -- Seth Godin

Daily Tech Digest - August 29, 2021

What is Terraform and Where Does It Fit in the DevOps Process?

Terraform is rapidly revolutionizing the entire landscape of DevOps and boosting the efficiency of DevOps projects. Terraform shares the same “Infrastructure as Code” (IaC) approach as most DevOps technologies and tools such as Ansible. However, Terraform operates in a distinct manner that is unique in itself, as it focuses primarily on the automation of the infrastructure itself. This necessarily means that your complete cloud infrastructure, including networking, instances, and IPs, can be defined in Terraform. There are some crucial differences between how Terraform operates and how other comparable technologies get the job done. Terraform provides support for all major cloud providers and doesn’t restrict users to a specific platform like other tools do. Terraform also handles provisioning failures in a much better way than other comparable tools. It achieves this by marking the suspect resources and ultimately removing and re-provisioning those resources in the next execution cycle. This approach improves the failure-handling mechanism to a great extent, since the system doesn’t have to re-build all the resources, including the ones that were successfully provisioned.


Why Blockchain-Based Cloud Computing Could Be the Future of IoT

With the adoption of IoT in more devices, it is also possible that data security threats such as hacking and data breaches increase significantly. So, to protect the trending IoT technology against such issues, blockchain technology comes into the picture. Blockchain networks are known to be more secure, cryptographically protected, and reliable in terms of keeping data safe. Thus, blockchain technology is also expanding along with the IoT to keep it safe. Generally, IoT is crucial to provide users with a centralized network of devices. For instance, this centralized network is important to control home appliances, security sensors, or network adapters. The IoT controller sends and receives the data from these devices to enable the wireless connection system. Currently, brands such as Samsung are manufacturing smart home appliances like air conditioners that can be connected to a simple mobile application. Moreover, Google’s Home device is also capable of controlling multiple devices with voice commands alone.


EXCLUSIVE Microsoft warns thousands of cloud customers of exposed databases

The vulnerability is in Microsoft Azure's flagship Cosmos DB database. A research team at security company Wiz discovered it was able to access keys that control access to databases held by thousands of companies. Wiz Chief Technology Officer Ami Luttwak is a former chief technology officer at Microsoft's Cloud Security Group. Because Microsoft cannot change those keys by itself, it emailed the customers Thursday telling them to create new ones. Microsoft agreed to pay Wiz $40,000 for finding the flaw and reporting it, according to an email it sent to Wiz. "We fixed this issue immediately to keep our customers safe and protected. We thank the security researchers for working under coordinated vulnerability disclosure," Microsoft told Reuters. Microsoft's email to customers said there was no evidence the flaw had been exploited. "We have no indication that external entities outside the researcher (Wiz) had access to the primary read-write key," the email said. “This is the worst cloud vulnerability you can imagine. It is a long-lasting secret,” Luttwak told Reuters. 


Linux 5.14 set to boost future enterprise application security

One of the ways that Linux users have had to mitigate those vulnerabilities is by disabling hyper-threading on CPUs and therefore taking a performance hit. “More specifically, the feature helps to split trusted and untrusted tasks so that they don’t share a core, limiting the overall threat surface while keeping cloud-scale performance relatively unchanged,” McGrath explained. Another area of security innovation in Linux 5.14 is a feature that has been in development for over a year and a half that will help to protect system memory in a better way than before. Attacks against Linux and other operating systems often target memory as a primary attack surface to exploit. With the new kernel, there is a capability known as memfd_secret() that will enable an application running on a Linux system to create a memory range that is inaccessible to anyone else, including the kernel. “This means cryptographic keys, sensitive data and other secrets can be stored there to limit exposure to other users or system activities,” McGrath said.
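As a rough illustration of how an application might call this, here is a hedged Go sketch that invokes memfd_secret() through its raw syscall number and maps the resulting descriptor. Assumptions to note: syscall number 447 is the x86-64 assignment in the 5.14 table, the kernel must be 5.14+ with secret memory enabled at boot (early kernels gated it behind secretmem.enable=1), and this is a sketch, not production key handling.

```go
// Hedged sketch of memfd_secret() usage from Go on linux/amd64.
package main

import (
	"fmt"
	"syscall"
)

const sysMemfdSecret = 447 // x86-64 syscall number for memfd_secret (assumed, per 5.14)

func main() {
	// memfd_secret(flags) returns a descriptor whose backing pages are
	// removed from the kernel's direct map, so even the kernel cannot
	// read them through normal means.
	fd, _, errno := syscall.Syscall(sysMemfdSecret, 0, 0, 0)
	if errno != 0 {
		fmt.Println("memfd_secret unavailable:", errno)
		return
	}
	defer syscall.Close(int(fd))

	// Size the region, then map it like any other file descriptor.
	const size = 4096
	if err := syscall.Ftruncate(int(fd), size); err != nil {
		fmt.Println("ftruncate:", err)
		return
	}
	mem, err := syscall.Mmap(int(fd), 0, size,
		syscall.PROT_READ|syscall.PROT_WRITE, syscall.MAP_SHARED)
	if err != nil {
		fmt.Println("mmap:", err)
		return
	}
	defer syscall.Munmap(mem)

	copy(mem, []byte("key material lives here"))
	fmt.Println("secret stored in memory invisible to the rest of the system")
}
```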


4 Reasons Why Every Data Scientist Should Study Organizational Psychology

As data scientists, we need to understand the psychology of our data sets in order to work with data effectively. We also need to motivate ourselves and others so that everyone is doing what it takes to deliver results on time and under budget. You might be a team leader or an executive who leads a data science team. There are many data science roles that require someone to lead others. If you are a data scientist in such a role, understanding the psychology of your data scientists is essential for success as a team leader or executive. Organizational psychologists study topics such as leadership styles, group dynamics, motivation, and conflict resolution — all of which are important for any data scientist looking to lead a team. Setting well-defined goals that your direct reports understand and allowing them to take ownership of their work are examples of strong leadership. Thus, having a deeper understanding of these psychology-based concepts and putting them to use in your daily work will make you more productive and give you and your team a more fulfilling work experience.


5 keys that define leaders in a storm

No one is invulnerable, and we all have a point that, when touched, takes us to that state where the most sensitive fibers appear. However powerful and almost indestructible some leaders may appear, I work constantly with such people, and in intimacy with themselves they are exactly the same as anyone else. To work on accepting vulnerability, the best tools are self-awareness, knowing who you are, and daring to dive deep into your inner aspects. By doing so, you will strengthen your confidence, and you will also know how to allow yourself the moments where it is not necessary to force yourself to be someone you are not, and simply swim in your emotions without repressing or hiding them. Limiting tendencies are accepted behaviors that you hold about your emotional world. They feed on restrictive beliefs that, by taking them as true, you assume to be natural and real. Limiting tendencies are made up of a range of triggers against which you act automatically, which manifest themselves in the form of reactions that always lead you down the same path.


Quantum computers could read all your encrypted data. This 'quantum-safe' VPN aims to stop that

In other words, encryption protocols as we know them are essentially a huge math problem for hackers to solve. With existing computers, cracking the equation is extremely difficult, which is why VPNs, for now, are still a secure solution. But quantum computers are expected to bring about huge amounts of extra computing power – and with that, the ability to hack any cryptography key in minutes. "A lot of secure communications rely on algorithms which have been very successful in offering secure cryptography keys for decades," Venkata Josyula, the director of technology at Verizon, tells ZDNet. "But there is enough research out there saying that these can be broken when there is a quantum computer available at a certain capacity. When that is available, you want to be protecting your entire VPN infrastructure." One approach that researchers are working on consists of developing algorithms that can generate keys that are too difficult to hack, even with a quantum computer. This area of research is known as post-quantum cryptography, and is particularly sought after by governments around the world.


Essential Skills Every Aspiring Cyber Security Professional Should Have

As a cybersecurity professional, your job will revolve around technology and its many applications, regardless of the position you’re going to fill. Therefore, a strong understanding of the systems, networks, and software you’re going to work with is crucial for landing a good job in the field. Cybersecurity is an extremely complex domain, with many sub-disciplines, which means it’s virtually impossible to be an expert in all areas. That’s why you should choose a specialization and strive to assimilate as much knowledge and experience as possible in your specific area of activity. Earning a certificate of specialization is a good starting point. It’s good to have a general knowledge of other areas of cybersecurity, but instead of becoming a jack of all trades, you should focus on your specific domain if you want to increase your chances of success. Cybersecurity is all about protecting the company or organization you work for against potential cyber threats. This implies identifying vulnerabilities, improving security policies and protocols, eliminating cybersecurity risks, minimizing damage after an attack and constantly coming up with new solutions to prevent similar issues from happening again.


The Surprising History of Distributed Ledger Technology

The concept of a distributed ledger can be traced back as far as the times of the Roman Empire. Then as now, the problem was how to achieve consensus on the data in a decentralized, distributed, and trustless manner. This problem is described as the Byzantine Generals’ Problem. The Byzantine Generals’ Problem describes a scenario where a general plans to launch an attack. However, since the army is very dispersed, he or she does not have centralized control. The only way to succeed is if the Byzantine army launches a planned and synchronized attack, where any miscommunication can cause the offence to fail. The only way the generals can synchronize a strike is by sending messages via messengers, which leads to several failure scenarios where different actors in the system behave dishonestly. Bitcoin solved the Byzantine Generals’ Problem by providing a unified protocol, called proof of work. The Generals’ Problem describes the main obstacle to massive, distributed processing and is the foundation for distributed ledger technology, where everyone must work individually to maintain a synchronized and distributed ledger.
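To make the proof-of-work idea concrete, here is a minimal sketch in Go: hash the message with an incrementing nonce until the digest meets a difficulty target. The message and difficulty are illustrative, and real Bitcoin uses double SHA-256 and a different target encoding, but the principle is the same: producing a valid nonce costs many hashes, while verifying it costs exactly one.

```go
// Minimal proof-of-work sketch: find a nonce whose SHA-256 digest
// starts with a required number of zero hex digits.
package main

import (
	"crypto/sha256"
	"encoding/hex"
	"fmt"
	"strings"
)

func main() {
	const difficulty = 5 // leading zero hex digits required (illustrative)
	msg := "synchronized attack at dawn"
	target := strings.Repeat("0", difficulty)

	for nonce := 0; ; nonce++ {
		sum := sha256.Sum256([]byte(fmt.Sprintf("%s:%d", msg, nonce)))
		digest := hex.EncodeToString(sum[:])
		if strings.HasPrefix(digest, target) {
			// Finding this nonce took many hashes; checking it takes one.
			fmt.Printf("nonce=%d digest=%s\n", nonce, digest)
			return
		}
	}
}
```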


The trouble with tools - Overcoming the frustration of failed data governance technologies

To explain, inside many organizations that claim to focus on data governance, the process relies on tools that produce a CSV of objects with no insight into where violations might exist. For example, they struggle to tell the difference between Personal Information (PI) and Personally Identifiable Information (PII). While most PI data doesn’t identify a specific person and isn’t as relevant to spotting governance violations, discovery tools still present that information to users, adding huge complexity to their processes and forcing them to fall back on manually filtering what’s needed from what isn’t. Instead, it’s critical that organizations are able to view, classify and correlate data wherever it is stored, and do so from a single platform - otherwise, they simply can’t add value to the governance process. In the ideal scenario, effective governance tools will let organizations correlate their governance processes across all data sources to show, for example, where PII is being held. The outputs then become much more accurate, so in a scenario where there are 10 million findings, users know with precision which of them are PII.
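As an illustration of the PI/PII distinction the article draws, a discovery pipeline might tag raw findings with pattern rules before a human ever sees them. The patterns and labels below are hypothetical placeholders; production classifiers combine many detectors plus context.

```python
import re

# Hypothetical detectors; real platforms use far richer classification.
PII_PATTERNS = {
    "email": re.compile(r"\b[\w.+-]+@[\w-]+\.[\w.]+\b"),
    "us_ssn": re.compile(r"\b\d{3}-\d{2}-\d{4}\b"),
    "phone": re.compile(r"\b\d{3}[-.]\d{3}[-.]\d{4}\b"),
}

def classify(finding: str) -> list[str]:
    """Return the PII categories matched in a raw finding, if any."""
    return [name for name, pat in PII_PATTERNS.items() if pat.search(finding)]

findings = [
    "customer note: prefers blue widgets",       # PI at most, not identifying
    "contact jane.doe@example.com re: invoice",  # PII: email address
    "applicant SSN 123-45-6789 on file",         # PII: national identifier
]
for f in findings:
    labels = classify(f)
    print("PII" if labels else "not PII", labels, "->", f)
```

Filtering 10 million findings down to the ones that actually carry identifiers is what turns a CSV dump into something a governance team can act on.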



Quote for the day:

"It is the responsibility of leadership to provide opportunity, and the responsibility of individuals to contribute." -- William Pollard

Daily Tech Digest - August 28, 2021

Why scrum has become irrelevant

The purpose of the retrospective is just that: to reflect. We look at what worked, what didn’t work, and what kinds of experiments we want to try. Unfortunately, what it boils down to is putting the same Post-its of “good teamwork” and “too many meetings” in the same swim lanes of “what went well,” “what went wrong,” and “what we will do better.” ... Scrum is often the enemy of productivity, and it makes even less sense in the remote, post-COVID world. The premise of scrum should not be that one cookie cutter fits every development team on the planet. A lot of teams are just doing things by rote, with zero evidence of their effectiveness. An ever-recurring nightmare of standups, sprint grooming, sprint planning and retros can only lead to staleness. Scrum does not promote new and fresh ways of working; instead, it champions repetition. Let good development teams self-organize to their context. Track what gets shipped to production, add the time it took (in days!) after the fact, and track that. Focus on reality and not some vaguely intelligible burndown chart. Automate all you can and have an ultra-smooth pipeline. Eradicate all waste.


Your data, your choice

These days, people are much more aware of the importance of shredding paper copies of bills and financial statements, but they are perfectly comfortable handing over staggering amounts of personal data online. Most people freely give their email address and personal details, without a second thought for any potential misuse. And it’s not just the tech giants – the explosion of digital technologies means that companies and spin-off apps are hoovering up vast amounts of personal data. It’s common practice for businesses to seek to “control” your data and to gather personal data they don’t currently need on the premise that it might be valuable someday. The other side of the personal data conundrum is the data strategy and governance model that guides an individual business. At Nephos, we use our data expertise to help our clients solve complex data problems and create sustainable data governance practices. As ethical and transparent data management becomes increasingly important, younger consumers are making choices based on how much they trust you to handle and manage their data.


How Kafka Can Make Microservice Planet a Better Place

Kafka was originally developed under the Apache license, and Confluent later built on it to deliver a more robust distribution. In fact, Confluent delivers the most complete distribution of Kafka with Confluent Platform, which extends Kafka with additional community and commercial features designed to enhance the streaming experience of both operators and developers in production, at massive scale. You can find thousands of documents about learning Kafka. In this article, we want to focus on using it in a microservice architecture, and for that we need an important concept named the Kafka Topic. ... The final concept we should learn about before starting our stream processing project is KStream. A KStream is an abstraction of a record stream of KeyValue pairs, i.e., each record is an independent entity/event. In the real world, Kafka Streams greatly simplifies stream processing from topics. Built on top of the Kafka client libraries, it provides data parallelism, distributed coordination, fault tolerance, and scalability. It deals with messages as an unbounded, continuous, real-time flow of records.
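Kafka Streams and KStream belong to Kafka's Java client ecosystem; as a rough Python analogue using the confluent-kafka package, the consume-transform-produce loop below approximates what a KStream does per record. The broker address and topic names are placeholders.

```python
from confluent_kafka import Consumer, Producer

# Placeholder broker/topics; a real KStream would also give you
# partition-level parallelism, state stores, and fault tolerance for free.
consumer = Consumer({
    "bootstrap.servers": "localhost:9092",
    "group.id": "order-enricher",
    "auto.offset.reset": "earliest",
})
producer = Producer({"bootstrap.servers": "localhost:9092"})
consumer.subscribe(["orders"])

try:
    while True:  # treat the topic as an unbounded stream of records
        msg = consumer.poll(1.0)
        if msg is None or msg.error():
            continue
        # Per-record transform: the essence of a KStream map operation.
        enriched = msg.value().decode("utf-8").upper()
        producer.produce("orders-enriched", key=msg.key(), value=enriched)
        producer.poll(0)  # serve delivery callbacks
finally:
    consumer.close()
    producer.flush()
```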


What Happens When ‘If’ Turns to ‘When’ in Quantum Computing?

Quantum computers will not replace the traditional computers we all use now. Instead they will work hand-in-hand to solve computationally complex problems that classical computers can’t handle quickly enough by themselves. There are four principal computational problems for which hybrid machines will be able to accelerate solutions—building on essentially one truly “quantum advantaged” mathematical function. But these four problems lead to hundreds of business use cases that promise to unlock enormous value for end users in coming decades. ... Not only is this approach inefficient, it also lacks accuracy, especially in the face of high tail risk. And once options and derivatives become bank assets, the need for high-efficiency simulation only grows as the portfolio needs to be re-evaluated continuously to track the institution’s liquidity position and fresh risks. Today this is a time-consuming exercise that often takes 12 hours to run, sometimes much more. According to a former quantitative trader at BlackRock, “Brute force Monte Carlo simulations for economic spikes and disasters can take a whole month to run.” 
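For context on what those long-running simulations look like, here is a minimal classical Monte Carlo pricer for a European call under standard Black-Scholes assumptions; the parameter values are made up. Quantum amplitude estimation promises roughly a quadratic reduction in the number of samples needed for a given accuracy, which is what makes this workload a candidate for hybrid machines.

```python
import math
import random

def price_european_call(s0, strike, rate, vol, maturity, n_paths):
    """Classical Monte Carlo: average the discounted payoff over simulated paths."""
    total = 0.0
    for _ in range(n_paths):
        z = random.gauss(0.0, 1.0)
        # Terminal price under geometric Brownian motion.
        s_t = s0 * math.exp((rate - 0.5 * vol**2) * maturity
                            + vol * math.sqrt(maturity) * z)
        total += max(s_t - strike, 0.0)
    return math.exp(-rate * maturity) * total / n_paths

# Made-up parameters; the error shrinks only as 1/sqrt(n_paths), which is why
# re-evaluating a large book continuously becomes an overnight batch job.
print(price_european_call(s0=100, strike=105, rate=0.01, vol=0.2,
                          maturity=1.0, n_paths=200_000))
```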


Can companies build on their digital surge?

If digital is the heart of the modern organization, then data is its lifeblood. Most companies are swimming in it. Average broadband consumption, for example, increased 47 percent in the first quarter of 2020 over the same quarter in the previous year. Used skillfully, data can generate insights that help build focused, personalized customer journeys, deepening the customer relationship. This is not news, of course. But during the pandemic, many leading companies have aggressively recalibrated their data posture to reflect the new realities of customer and worker behavior by including models for churn or attrition, workforce management, digital marketing, supply chain, and market analytics. One mining company created a global cash-flow tool that integrated and analyzed data from 20 different mines to strengthen its solvency during the crisis. ... While it’s been said often, it still bears repeating: technology solutions cannot work without changes to talent and how people work. Those companies getting value from tech pay as much attention to upgrading their operating models as they do to getting the best tech. 


Understanding Direct Domain Adaptation in Deep Learning

To fill the gap between source data (training data) and target data (test data), a concept called domain adaptation is used. It is the ability to apply an algorithm trained on one or more source domains to a different target domain, and it is a subcategory of transfer learning. In domain adaptation, the source and target data have the same feature space but come from different distributions, while transfer learning also covers cases where the target feature space differs from the source feature space. ... In unsupervised domain adaptation, the learning data contains a set of labelled source examples, a set of unlabeled source examples and a set of unlabeled target examples. In semi-supervised domain adaptation, we also take a small set of labelled target examples alongside the unlabeled ones, and in the supervised approach, all the examples are labelled. A trained neural network generalizes well when the target data is represented similarly to the source data; to accomplish this, a researcher from King Abdullah University of Science and Technology, Saudi Arabia, proposed an approach called ‘Direct Domain Adaptation’ (DDA).



AI: The Next Generation Anti-Corruption Technology

Artificial intelligence, according to Oxford Insights, is the “next step in anti-corruption,” partly because of its capacity to uncover patterns in datasets that are too vast for people to handle. Humans can then focus on specifics and follow up on suspected abuse, fraud, or corruption by using AI to discover components of interest. Mexico is an example of a country where artificial intelligence alone may not be enough to win the war. ... As a result, the cost of connectivity has decreased significantly, and the government is currently preparing its largest investment ever. By 2024, the objective is to have a 4G mobile connection available to more than 90% of the population. In a society moving toward digital state services, affordable connectivity is critical. The next stage is for the country to establish an AI strategy. The forthcoming national AI strategy will include initiatives such as striving toward AI-based solutions to deliver government services for less money or introducing AI-driven smart procurement. In brief, Mexico aspires to be one of the world’s first 10 countries to adopt a national AI policy.


Introduction to the Node.js reference architecture, Part 5: Building good containers

Why should you avoid using reserved (privileged) ports (1-1023)? Docker or Kubernetes will just map the port to something different anyway, right? The problem is that applications not running as root normally cannot bind to ports 1-1023, and while it might be possible to allow this when the container is started, you generally want to avoid it. In addition, the Node.js runtime has some limitations that mean if you add the privileges needed to run on those ports when starting the container, you can no longer do things like set additional certificates in the environment. Since the ports will be mapped anyway, there is no good reason to use a reserved (privileged) port. Avoiding them can save you trouble in the future. ... A common question is, "Why does container size matter?" The expectation is that with good layering and caching, the total size of a container won't end up being an issue. While that can often be true, environments like Kubernetes make it easy for containers to spin up and down and do so on different machines. Each time this happens on a new machine, you end up having to pull down all of the components. 
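The privileged-port restriction is enforced by the operating system rather than by Node.js itself; this quick Python check (run as a non-root user on Linux) shows why containerized apps typically listen on something like 8080 and let the orchestrator map the port:

```python
import socket

def try_bind(port: int) -> str:
    """Attempt to bind a TCP socket; ports 1-1023 need elevated privileges."""
    sock = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
    try:
        sock.bind(("0.0.0.0", port))
        return "bound OK"
    except PermissionError:
        return "permission denied (reserved port, non-root user)"
    finally:
        sock.close()

print(80, "->", try_bind(80))      # typically fails without root or CAP_NET_BIND_SERVICE
print(8080, "->", try_bind(8080))  # unprivileged port: binds fine; the orchestrator maps it
```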


Now is the time to prepare for the quantum computing revolution

We've proven that it can happen already, so that is down the line. But it's in the five- to 10-year range until we have that hardware available. And that's where a lot of the promise of these exponentially faster algorithms lies. These are the algorithms that will use fault-tolerant computers to basically look at all the options available in a combinatorial matrix. So, if you have something like a Monte Carlo simulation, you can try essentially all the different variables that are possible, look at every possible combination and find the best optimal solution. That's practically impossible on today's classical computers: you have to choose which variables you're going to use, reduce things and take shortcuts. But with these fault-tolerant computers, for a significant portion of the possible solution space, we can look at all of the combinations. So, you can imagine an almost infinite or exponential number of variables that you can try out to see what your best solution is.


Ragnarok Ransomware Gang Bites the Dust, Releases Decryptor

The gang is the latest ransomware group to shutter operations, due in part to mounting pressure and crackdowns from international authorities that have already led some key players to cease their activity. In addition to Avaddon and SynAck, two heavy hitters in the game – REvil and DarkSide – also closed up shop recently. Other ransomware groups are feeling the pressure in other ways. An apparently vengeful affiliate of the Conti gang recently leaked the group’s playbook after alleging that the notorious cybercriminal organization underpaid him for doing its dirty work. However, even as some ransomware groups are hanging it up, new threat groups that may or may not have spawned from the ranks of these organizations are sliding in to fill the gaps they left. Haron and BlackMatter are among those that have emerged recently with the intent to use ransomware against large organizations that can pay million-dollar ransoms. Indeed, some think Ragnarok’s exit from the field isn’t permanent, and that the group will resurface in a new incarnation at some point.



Quote for the day:

"Leadership is a matter of having people look at you and gain confidence, seeing how you react. If you're in control, they're in control." -- Tom Laundry