Showing posts with label data orchestration. Show all posts

Daily Tech Digest - November 05, 2024

GenAI in healthcare: The state of affairs in India

Currently, the All-India Institute of Medical Sciences (AIIMS) Delhi is the only public healthcare institution exploring AI-driven solutions. AIIMS, in collaboration with the Ministry of Electronics & Information Technology and the Centre for Development of Advanced Computing (C-DAC) Pune, launched the iOncology.ai platform to support oncologists in making informed cancer treatment decisions. The platform uses deep learning models to detect early-stage ovarian cancer, and available data shows this has already improved patient outcomes while reducing healthcare costs. This is one of the few key AI-driven initiatives in India. Although AI adoption in the healthcare provider segment is relatively high at 68%, a large portion of deployments are still in the PoC phase. What could transform India’s healthcare with Generative AI? What could help bring care to those who need it most? ... India has tremendous potential in machine intelligence, especially as we develop our own Gen AI capabilities. In healthcare, however, the pace of progress is hindered by financial constraints and a shortage of specialists in the field. Concerns over data breaches and cybersecurity incidents also contribute to this aversion. 


OWASP Beefs Up GenAI Security Guidance Amid Growing Deepfakes

To help organizations develop stronger defenses against AI-based attacks, the Top 10 for LLM Applications & Generative AI group within the Open Worldwide Application Security Project (OWASP) released a trio of guidance documents for security organizations on Oct. 31. To its previously released AI cybersecurity and governance checklist, the group added a guide for preparing for deepfake events, a framework for creating AI security centers of excellence, and a curated database of AI security solutions. ... The trajectory of deepfakes is easy to predict — even if they are not good enough to fool most people today, they will be in the future, says Eyal Benishti, founder and CEO of Ironscales. That means that human training will likely only go so far. AI videos are getting eerily realistic, and a fully digital twin of another person controlled in real time by an attacker — a true "sock puppet" — is likely not far behind. "Companies want to try and figure out how they get ready for deepfakes," he says. "They are realizing that this type of communication cannot be fully trusted moving forward, which ... will take people some time to realize and adjust." In the future, once the telltale artifacts are gone, better defenses will be necessary, Exabeam's Kirkwood says.


Open-source software: A first attempt at organization after CRA

The Cyber Resilience Act was a shock that awakened many people from their comfort zone: How dare the “technical” representatives of the European Union question the security of open-source software? The answer is very simple: because we never told them, and they assumed it was because no one was concerned about security. ... The CRA requires software with automatic updates to roll out security updates automatically by default, while allowing users to opt out. Companies must conduct a cyber risk assessment before a product is released and for 10 years or the product's expected lifecycle, and must notify the EU cybersecurity agency ENISA of any incidents within 24 hours of becoming aware of them, as well as take measures to resolve them. In addition, software products must carry the CE marking to show that they meet a minimum level of cybersecurity checks. Open-source stewards will have to care about the security of their products but will not be asked to follow these rules. In exchange, they will have to improve the communication and sharing of security best practices, many of which are already in place but have not always been shared. So, the first action was to create a project to standardize them for the entire open-source software industry.


10 ways hackers will use machine learning to launch attacks

Attackers aren’t just using machine-learning security tools to test if their messages can get past spam filters. They’re also using machine learning to create those emails in the first place, says Adam Malone, a former EY partner. “They’re advertising the sale of these services on criminal forums. They’re using them to generate better phishing emails. To generate fake personas to drive fraud campaigns.” These services are specifically being advertised as using machine learning, and it’s probably not just marketing. “The proof is in the pudding,” Malone says. “They’re definitely better.” ... Criminals are also using machine learning to get better at guessing passwords. “We’ve seen evidence of that based on the frequency and success rates of password guessing engines,” Malone says. Criminals are building better dictionaries to hack stolen hashes. They’re also using machine learning to identify security controls, “so they can make fewer attempts and guess better passwords and increase the chances that they’ll successfully gain access to a system.” ... The most frightening use of artificial intelligence is the deepfake tools that can generate video or audio that is hard to distinguish from a real human. “Being able to simulate someone’s voice or face is very useful against humans,” says Montenegro.


Breaking Free From the Dead Zone: Automating DevOps Shifts for Scalable Success

If ‘Shift Left’ is all about integrating processes closer to the source code, ‘Shift Right’ offers a complementary approach by tackling challenges that arise after deployment. Some decisions simply can’t be made early in the development process. For example, which cloud instances should you use? How many replicas of a service are necessary? What CPU and memory allocations are appropriate for specific workloads? These are classic ‘Shift Right’ concerns that have traditionally been managed through observability and system-generated recommendations. Consider this common scenario: when deploying a workload to Kubernetes, DevOps engineers often guess the memory and CPU requests, specifying these in YAML configuration files before anything is deployed. But without extensive testing, how can an engineer know the optimal settings? Most teams don’t have the resources to thoroughly test every workload, so they make educated guesses. Later, once the workload has been running in production and actual usage data is available, engineers revisit the configurations. They adjust settings to eliminate waste or boost performance, depending on what’s needed. It’s exhausting work and, let’s be honest, not much fun.
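The "educated guess" described above typically lives in the workload manifest itself. A minimal, illustrative Deployment fragment (names and values here are hypothetical placeholders, not recommendations) shows where those guesses go:

```yaml
# Illustrative Kubernetes Deployment fragment: resource requests are
# guessed up front in YAML, then revisited once production usage data
# is available -- the classic Shift Right loop described above.
apiVersion: apps/v1
kind: Deployment
metadata:
  name: example-service        # hypothetical workload name
spec:
  replicas: 3                  # how many replicas? another Shift Right question
  template:
    spec:
      containers:
        - name: app
          image: example/app:1.0
          resources:
            requests:
              cpu: "500m"      # initial guess, pre-deployment
              memory: "256Mi"  # initial guess, pre-deployment
            limits:
              cpu: "1"
              memory: "512Mi"
```

Once real usage data arrives, it is exactly these `requests` and `limits` values that engineers go back and tune.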


5 cloud market trends and how they will impact IT

“Capacity growth will be driven increasingly by the even larger scale of those newly opened data centers, with generative AI technology being a prime reason for that increased scale,” Synergy Research writes. Not surprisingly, the companies with the broadest data center footprint are Amazon, Microsoft, and Google, which account for 60% of all hyperscale data center capacity. And the announcements from the Big 3 are coming fast and furious. ... “In effect, industry cloud platforms turn a cloud platform into a business platform, enabling an existing technology innovation tool to also serve as a business innovation tool,” says Gartner analyst Gregor Petri. “They do so not as predefined, one-off, vertical SaaS solutions, but rather as modular, composable platforms supported by a catalog of industry-specific packaged business capabilities.” ... There are many reasons for cloud bills increasing, beyond simple price hikes. Linthicum says organizations that simply “lifted and shifted” legacy applications to the public cloud, rather than refactoring or rewriting them for cloud optimization, ended up with higher costs. Many organizations overprovisioned and neglected to track cloud resource utilization. On top of that, organizations are constantly expanding their cloud footprint.


The Modern Era of Data Orchestration: From Data Fragmentation to Collaboration

Data systems have always needed to make assumptions about file, memory, and table formats, but in most cases, they've been hidden deep within their implementations. A narrow API for interacting with a data warehouse or data service vendor makes for clean product design, but it does not maximize the choices available to end users. ... In a closed system, the data warehouse maintains its own table structure and query engine internally. This is a one-size-fits-all approach that makes it easy to get started but can be difficult to scale to new business requirements. Lock-in can be hard to avoid, especially when it comes to capabilities like governance and other services that access the data. Cloud providers offer seamless and efficient integrations within their ecosystems because their internal data format is consistent, but this may close the door on adopting better offerings outside that environment. Exporting to an external provider instead requires maintaining connectors purpose-built for the warehouse's proprietary APIs, and it can lead to data sprawl across systems. ... An open, deconstructed system standardizes its lowest-level details. This allows businesses to pick and choose the best vendor for a service while having the seamless experience that was previously only possible in a closed ecosystem.
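The idea of a "deconstructed" system standardizing its lowest-level details can be sketched with a toy example. Real open formats such as Parquet with Iceberg or Delta metadata are far richer; this stdlib-only sketch only shows the shape of the idea, with data files plus a small manifest that any engine can read, so no proprietary warehouse API sits between vendors and the data:

```python
import csv
import json
import tempfile
from pathlib import Path

# Toy "open table": data lives in plain CSV files, and a small JSON
# manifest records the schema and file list. Any engine that understands
# the manifest can read the table, decoupling storage from query engine.

def write_table(root: Path, name: str, rows: list[dict]) -> None:
    table_dir = root / name
    table_dir.mkdir(parents=True, exist_ok=True)
    data_file = table_dir / "part-0.csv"
    fields = list(rows[0])
    with data_file.open("w", newline="") as f:
        writer = csv.DictWriter(f, fieldnames=fields)
        writer.writeheader()
        writer.writerows(rows)
    manifest = {"schema": fields, "files": [data_file.name]}
    (table_dir / "manifest.json").write_text(json.dumps(manifest))

def read_table(root: Path, name: str) -> list[dict]:
    table_dir = root / name
    manifest = json.loads((table_dir / "manifest.json").read_text())
    rows = []
    for fname in manifest["files"]:
        with (table_dir / fname).open(newline="") as f:
            rows.extend(dict(r) for r in csv.DictReader(f))
    return rows

root = Path(tempfile.mkdtemp())
write_table(root, "events", [{"id": "1", "kind": "click"}, {"id": "2", "kind": "view"}])
print(read_table(root, "events"))
```

Because the layout is documented rather than hidden behind an API, a second "engine" could be written against the same manifest without the first vendor's involvement.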


New OAIC AI Guidance Sharpens Privacy Act Rules, Applies to All Organizations

The new AI guidance outlines five key takeaways that require attention, and though the term “guidance” is used, some of these constitute expansions of the application of existing rules. The first of these is that Privacy Act requirements for personal information apply to AI systems, both in terms of user input and what the system outputs. ... The second AI guidance takeaway stipulates that privacy policies must be updated to have “clear and transparent” information about public-facing AI use. The third takeaway notes that the generation of images of real people, whether due to a hallucination or the intentional creation of something like a deepfake, is also covered by personal information privacy rules. The fourth AI guidance takeaway states that any personal information input into AI systems can only be used for the primary purpose for which it was collected, unless consent is collected for other uses or those secondary uses can be reasonably expected to be necessary. The fifth and final takeaway is perhaps a case of burying the lede; the OAIC simply suggests that organizations not collect personal information through AI systems at all due to the “significant and complex privacy risks involved.”


DevOps Moves Beyond Automation to Tackle New Challenges

“The future of DevOps is DevSecOps,” Jonathan Singer, senior product marketing manager at Checkmarx, told The New Stack. “Developers need to consider high-performing code as secure code. Everything is code now, and if it’s not secure, it can’t be high-performing,” he added. Checkmarx is an application security vendor that allows enterprises to secure their applications from the first line of code to deployment in the cloud, Singer said. The DevOps perspective has to be the same as the application security perspective, he noted. Some people think in terms of the environment around the app, but Checkmarx focuses on the code in the application itself, making sure it’s safe and secure when it’s deployed, he added. “It might look like the security teams are giving more responsibility to the dev teams, and therefore you need security people in the dev team,” Singer said. Checkmarx is automating the heavy mental lifting by prioritizing and triaging scan results. With the amount of code, especially for large organizations, finding ten thousand vulnerabilities is fairly common, but they will have different levels of severity. If a vulnerability is not exploitable, you can knock it out of the results list. “Now we’re in the noise reduction game,” he said.


How Quantum Machine Learning Works

While quantum computing is not the most imminent trend data scientists need to worry about today, its effect on machine learning is likely to be transformative. “The really obvious advantage of quantum computing is the ability to deal with really enormous amounts of data that we can't really deal with any other way,” says Fitzsimons. “We've seen the power of conventional computers has doubled effectively every 18 months with Moore's Law. With quantum computing, the number of qubits is doubling about every eight to nine months. Every time you add a single qubit to a system, you double its computational capacity for machine learning problems and things like this, so the computational capacity of these systems is growing double exponentially.” ... Quantum-inspired software techniques can also be used to improve classical ML, such as tensor networks that can describe machine learning structures and improve computational bottlenecks to increase the efficiency of LLMs like ChatGPT. “It’s a different paradigm, entirely based on the rules of quantum mechanics. It’s a new way of processing information, and new operations are allowed that contradict common intuition from traditional data science,” says Orús.
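The claim that each added qubit doubles capacity follows from the state space itself: an n-qubit register is described by 2^n complex amplitudes. A two-line sketch makes the growth concrete:

```python
# An n-qubit state vector has 2**n complex amplitudes, so adding one
# qubit doubles the size of the state a quantum computer manipulates.
def state_vector_size(n_qubits: int) -> int:
    return 2 ** n_qubits

for n in (1, 10, 50):
    print(n, state_vector_size(n))
# 50 qubits already means 2**50 (~1.1e15) amplitudes -- more than a
# classical machine can comfortably store explicitly.
```

This is also why qubit counts doubling every eight to nine months compounds so much faster than Moore's Law: the doubling applies to the exponent, not the total.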



Quote for the day:

"I find that the harder I work, the more luck I seem to have." -- Thomas Jefferson

Daily Tech Digest - April 23, 2024

Confronting the ethical issues of human-like AI

Regrettably, innovation leaders confronting the thorny ethical questions currently have little in the way of authoritative guidance to turn to. Unlike other technical professions such as civil and electrical engineering, computer science and software engineering currently lack established codes of ethics backed by professional licensing requirements. There is no widely adopted certification or set of standards dictating the ethical use of pseudoanthropic AI techniques. However, by integrating ethical reflection into their development process from the outset and drawing on lessons learned by other fields, technologists working on human-like AI can help ensure these powerful tools remain consistent with our values. ... Tech leaders must recognize that developing human-like AI capabilities is practicing ethics by another means. When it comes to human-like AI, every design decision has moral implications that must be proactively evaluated. Something as seemingly innocuous as using a human-like avatar creates an ethical burden. The approach can no longer be reactive, tacking on ethical guidelines once public backlash hits. 


AI in Platform Engineering: Concerns Grow Alongside Advantages

AI algorithms can automatically analyze past usage patterns, real-time demands and resource availability to allocate resources like servers, storage and databases. AI-powered platforms can ensure reliable infrastructure, eliminating the need for manual configuration and provisioning and saving platform engineers valuable time and effort. Since these platforms have been trained on vast amounts of data that enable them to understand individual developer needs and preferences, they can provide resources when necessary. As a result, they can be used to customize development environments and generate configurations with minimal manual effort. Organizations gather an increasing amount of data daily. As a result, businesses must handle and manage a large amount of data and personal information, ensuring it remains secure and protected. Now teams can reduce the risk of noncompliance and associated penalties by automating crucial processes like records management and ensuring that tasks are carried out in compliance with industry governance protocols and standards, a plus in highly regulated markets.


Secrets of business-driven IT orgs

“The bottom line is that in today’s era of rapid technological innovation, IT teams are critical partners that teams all across the organization must rely on in order to meet and exceed their business goals,” says Mindy Lieberman, CIO of tech firm MongoDB. “A truly business-driven IT team shouldn’t just be aligned with business strategy, it should have a seat at the leadership table, have a hand in directing business strategy, and be brought in on any major transformational initiatives from the get-go.” To get that hand in directing business strategy, Lieberman created a base with “the right people and the right roadmap, [as well as] the right processes and technology to ensure agility and transparency.” She views IT’s agenda and the business agenda as one and the same. She has modernized operations and internal application infrastructure to ensure her tech team can be responsive to business and customer-facing needs. “Being a business-driven IT team means aligning the tools, processes, technology, and success metrics across an organization to ensure that we are aligned on the outcomes we are looking for and the strategy to deliver those outcomes,” Lieberman says.


Simplifying Intelligent Automation Adoption for Businesses of All Sizes

To maximize success, it's essential to prioritize high-impact processes and streamline repetitive tasks for instant efficiency boosts. When selecting processes for automation, assess their digitization level, stability, and exceptions to gauge implementation feasibility. It's also a must to collaborate with automation service providers to tailor solutions and ensure seamless integration. There must be transparency in the communication of automation goals and benefits across the organization, addressing concerns and fostering open dialogue for unified commitment and understanding. While the potential of intelligent automation is undeniable, the journey toward its successful implementation is a collaborative effort. By understanding the unique challenges faced by businesses of various sizes and actively addressing them, we can unlock the immense potential of this technology. Aligning automation initiatives with strategic goals ensures that efforts contribute directly to the organization's success and growth. Engaging stakeholders early in the process and demonstrating the potential benefits of selected processes can lead to greater acceptance and collaboration. 


Preventing Cyber Attacks Outweighs Cure

"A belief, that it is ok to compromise security for perceived convenience, is counter intuitive. There are few things more inconvenient than having to rebuild a person's identity or try to run a hospital or airport without the systems on which we now depend. Governments must invest resources to roll out defence grade preventive mechanisms and build the cyber security infrastructures that underpin zero trust networks. Indeed, it is widely accepted that identity centric security is the bedrock to Zero Trust Architecture. "It is important to acknowledge the release of the Australian Government's Cyber Strategy, efforts to uplift critical infrastructure standards and progress coordinating a Country wide digital identity framework. I also welcome the ambitious target to embed a zero-trust culture across the Australian Public Service to become a global cyber leader by 2030. "It is also intended to achieve a consistency in cyber security standards across government, industry, and jurisdictions. I commend the Australian Government for taking the initial steps to strengthen legislation and mandate the reporting of incidents. 


Here's why RISC-V is so important

RISC-V is quietly enabling a divergence in custom hardware for domain-specific applications, by providing an easy (or at least easier) pathway for businesses and academics to build their own versions of hardware when off-the-shelf options aren't suitable. This works in tandem with the wide range of fully open-source RISC-V implementations on the market. Businesses may be able to take an existing open-source implementation of RISC-V (effectively a design for a complete processor core, usually written in a dedicated language like Verilog/SystemVerilog) and make modifications to suit their specific use case. This can involve dropping aspects that aren't needed, and adding pre-bundled supporting elements to the core, which may even be off-the-shelf elements. This means that where previously it wouldn't have been practical or affordable to build specific hardware for a feature, it's now more broadly possible. Companies small and large are already utilizing this technology. It's hard to know whether companies are designing cores from the ground up or using pre-constructed designs, but custom RISC-V silicon has already made its way into the market.


Can Generative AI Help With Quicker Threat Detections?

ChatGPT can help to some extent with security operations. Microsoft Security Copilot can reduce the load on SOCs. Tools such as Dropzone - an autonomous SOC analysis platform - can look at a phishing alert and take responsive action, with no code, and you don't have to write any playbooks for this. It just analyzes [the threat] and takes the required action. That class of tool is where organizations are going to be able to scale. From a people standpoint, organizations are having trouble hiring or retaining SOC personnel. These tools are going to take a lot of that load off the people and allow them to focus on more important things. ... Organizations are crafting generative AI acceptable-use policies. All their employees have to read and sign them. Some organizations are taking it a step further and trying to provide training, just as companies have an annual, basic cyber awareness course. When I ask vendors about training, they either make generative AI training part of cyber training or have separate training. People take the policy, they read it and then they have the training so they understand what's expected of them.


The Importance Of Proactive & Empathetic Leadership Amidst A Changing Talent Landscape

Empathetic leadership demonstrates the company’s mission and values in action. It starts with the authenticity of who you are as a leader and sharing your own vulnerability. In vulnerability, you are able to empathize and get on someone’s level. You can’t do that if you’re leading by force or in a top-down manner. Empathetic leaders see their employees as people with lives. They trust they will do their jobs. They acknowledge that sometimes people have a tough year or two. These two attributes can—and must—coexist! For leaders, a clear place to start is being proactive in how you develop and implement policies. It’s about creating policies for the company you want to be, not just where you are now. When I took on a leadership position at a startup, I became the first pregnant person at the company. There were no parental leave policies or maternal care benefits in place – these policies needed to be created retroactively around me.


European police chiefs target E2EE in latest demand for ‘lawful access’

“Tech companies are putting a lot of the information on end-to-end encryption. We have no problem with encryption; I’ve got a responsibility to try and protect the public from cybercrime, too — so strong encryption is a good thing — but what we need is for the companies to still be able to pass us the information we need to keep the public safe.” Currently, as a result of being able to scan messages that aren’t encrypted, platforms are sending tens of millions of child safety-related reports a year to police forces around the world, Biggar said — adding a further claim that “on the back of that information, we typically safeguard 1,200 children a month and arrest 800 people.” The implication here is that those reports will dry up if Meta continues expanding its use of E2EE to Instagram. Pointing out that Meta-owned WhatsApp has had the gold standard encryption as its default for years, Robinson wondered if this wasn’t a case of the crime agency trying to close the stable door after the horse has bolted. He got no straight answer to that — just more head-scratching equivocation.


The dawn of intelligent and automated data orchestration

Moving the data from one vendor’s storage type to another, or to a different location or cloud, involves creating a new copy of both the file system metadata and the actual file essence. This proliferation of file copies and the complexity needed to initiate copy management across silos interrupts user access and inhibits IT modernization and consolidation use cases. This reality also impacts data protection, which may become fragmented across the silos. ... It also creates economic inefficiencies when multiple redundant copies of data are created, or when idle data gets stuck on expensive high-performance storage systems when it would be better managed elsewhere. What is needed is a way to provide users and applications with seamless multi-protocol access to all their data, which is often fragmented across multiple vendor storage silos, including across multiple sites and cloud providers. In addition to global user access, IT administrators need to be able to automate cross-platform data services for workflow management, data protection, tiering, etc., but do so without interrupting users or applications.



Quote for the day:

"The ability to summon positive emotions during periods of intense stress lies at the heart of effective leadership." -- Jim Loehr

Daily Tech Digest - July 06, 2020

Benefits of RPA: RPA Best Practices for successful digital transformation

A main benefit of RPA solutions is that they reduce human error while enabling employees to feel more human by engaging in conversations and assignments that are more complex but could also be more rewarding. For instance, instead of having a contact center associate enter information while also speaking with a customer, an RPA solution can automatically collect, upload, or sync data with other systems for the associate to approve while focusing on forming an emotional connection with the customer. Another impact of RPA is it can facilitate and streamline employee onboarding and training. An RPA tool, for instance, can pre-populate forms with the new hire’s name, address, and other key data from the resume and job application form, saving the employee time. For training, RPA can conduct and capture data from training simulations, allowing a global organization to ensure all employees receive the same information in a customized and efficient manner. RPA is not for every department and it’s certainly not a panacea for retention and engagement problems. But by thinking carefully about the benefits that it offers to employees, RPA can transform workflows—making employees’ jobs less robotic and more rewarding.
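The form pre-population step described above is, at its core, a field-mapping exercise. A toy sketch (field names are hypothetical, not any particular RPA product's API) shows the idea:

```python
# Toy sketch of RPA form pre-population: map fields already parsed from
# a job application onto a new-hire onboarding form, so the employee
# only has to review the result rather than retype it. Fields missing
# from the application are left blank for manual entry.
def prefill_onboarding_form(application: dict, form_fields: list[str]) -> dict:
    return {field: application.get(field, "") for field in form_fields}

application = {"name": "A. Hire", "address": "12 Example St", "phone": "555-0100"}
form = prefill_onboarding_form(application, ["name", "address", "start_date"])
print(form)  # {'name': 'A. Hire', 'address': '12 Example St', 'start_date': ''}
```

Real RPA tools wrap this same mapping in connectors to document parsers and HR systems, but the time saved comes from exactly this kind of copy-forward.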


Hey Alexa. Is This My Voice Or a Recording?

The idea is to quickly detect whether a command given to a device is live or is prerecorded. It's a tricky proposition given that a recorded voice has characteristics similar to a live one. "Such attacks are known as one of the easiest to perform as it simply involves recording a victim's voice," says Hyoungshick Kim, a visiting scientist at CSIRO. "This means that not only is it easy to get away with such an attack, it's also very difficult for a victim to work out what's happened." The impacts can range from using someone else's credit card details to make purchases, to controlling connected devices such as smart appliances and accessing personal data such as home addresses and financial data, he says. The voice-spoofing problem has been tackled by other research teams, which have come up with solutions. In 2017, 49 research teams submitted research for the ASVspoof 2017 Challenge, a project aimed at developing countermeasures for automatic speaker verification spoofing. The ASV competition produced one technology that had a low error rate compared to the others, but it was computationally expensive and complex, according to Void's research paper.


Reduce these forms of AI bias from devs and testers

Cognitive bias means that individuals think subjectively, rather than objectively, and therefore influence the design of the product they're creating. Humans filter information through their unique experience, knowledge and opinions. Development teams cannot eliminate cognitive bias in software, but they can manage it. Let's look at the biases that most frequently affect quality, and where they appear in the software development lifecycle. Use the suggested approaches to overcome cognitive biases, including AI bias, and limit their effect on software users. A person knowledgeable about a topic finds it difficult to discuss it from a neutral perspective. The more the person knows, the harder neutrality becomes. That bias manifests within software development teams when experienced or exceptional team members believe that they have the best solution. Infuse the team with new members to offset some of the bias that occurs with subject matter experts. Cognitive bias often begins in backlog refinement. Preconceived notions about application design can affect team members' critical thinking. During sprint planning, teams can fall into the planning fallacy: underestimating the actual time necessary to complete a user story.


Deploying the Best of Both Worlds: Data Orchestration for Hybrid Cloud

A different approach to bridging the worlds of on-prem data centers and the growing variety of cloud computing services is offered by a company called Alluxio. From their roots at Berkeley's AMPLab, they've been focused on solving this problem. Alluxio decided to bring the data to computing in a different way. Essentially, the technology provides an in-memory cache that nestles between cloud and on-prem environments. Think of it like a new spin on data virtualization, one that leverages an array of cloud-era advances. According to Alex Ma, director of solutions engineering at Alluxio: "We provide three key innovations around data: locality, accessibility and elasticity. This combination allows you to run hybrid cloud solutions where your data still lives in your data lake." The key, he said, is that "you can burst to the cloud for scalable analytics and machine-learning workloads where the applications have seamless access to the data and can use it as if it were local--all without having to manually orchestrate the movement or copying of that data."
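The locality idea above can be illustrated with a minimal read-through cache sketch. This is only the shape of the concept, not Alluxio's actual API; the `fetch_remote` callable stands in for a slow read from a remote data lake:

```python
# Minimal read-through cache: compute jobs ask the cache layer for data;
# on a miss it fetches from the (slow, remote) store and keeps a local
# copy, so repeated access behaves as if the data were local.
class ReadThroughCache:
    def __init__(self, fetch_remote):
        self._fetch_remote = fetch_remote  # e.g. an S3/HDFS read
        self._local = {}                   # local copies (memory/SSD tier)
        self.misses = 0

    def read(self, path: str) -> bytes:
        if path not in self._local:
            self.misses += 1
            self._local[path] = self._fetch_remote(path)
        return self._local[path]

remote_store = {"s3://lake/events.parquet": b"...bytes..."}
cache = ReadThroughCache(remote_store.__getitem__)
cache.read("s3://lake/events.parquet")  # miss: fetched from the lake
cache.read("s3://lake/events.parquet")  # hit: served locally
print(cache.misses)  # 1
```

Because the application only ever talks to the cache layer, the data can "still live in your data lake" while burst workloads in the cloud see local-speed reads after the first access.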


Redis and open source succession planning

Speaking of the intersection of open source software development and cloud services, open source luminary Tim Bray has said, “The qualities that make people great at carving high-value software out of nothingness aren’t necessarily the ones that make them good at operations.” The same can be said of maintaining open source projects. Just because you’re an amazing software developer doesn’t mean you’ll be a great software maintainer, and vice versa. Perhaps more pertinently to the Sanfilippo example, developers may be good at both, yet not be interested in both. (By all accounts Sanfilippo has been a great maintainer, though he’s the first to say he could become a bottleneck because he liked to do much of the work himself rather than relying on others.) Sanfilippo has given open source communities a great example of how to think about “career” progression within these projects, but the same principle applies within enterprises. Some developers will thrive as managers (of people or of their code), but not all. As such, we need more companies to carve out non-management tracks for their best engineers, so developers can progress their career without leaving the code they love. 


How data science delivers value in a post-pandemic world

The uptick in the need for data science, across industries, comes with the need for data science teams. While hiring may have slowed down in the tech sector – Google slowed its hiring efforts during the pandemic – data science professionals are still in high demand. However, it’s important to keep a close eye on how these teams continue to evolve. One position which is increasingly in-demand as businesses become more data-driven is the role of the Algorithm Translator. This person is responsible for translating business problems into data problems and, once the data answer is found, articulating this back into an actionable solution for business leaders to apply. The Algorithm Translator must first break down the problem statement into use cases, connect these use cases with the appropriate data set, and understand any limitations on the data sources so the problem is ready to be solved with data analytics. Then, in order to translate the data answer into a business solution, the Algorithm Translator must stitch the insights from the individual use cases together to create a digestible data story that non-technical team members can put into action.


Open source contributions face friction over company IP

Why the change? Companies that have established open source programs say the most important factor is developer recruitment. "We want to have a good reputation in the open source world overall, because we're hiring technical talent," said Bloomberg's Fleming. "When developers consider working for us, we want other people in the community to say 'They've been really contributing a lot to our community the last couple years, and their patches are always really good and they provide great feedback -- that sounds like a great idea, go get a job there.'" Although companies pay for the time their own developers spend contributing to open source, they also benefit from the labor of every other organization that contributes to the codebase. Making code public also forces engineers to adhere more strictly to best practices than if it were kept under wraps, and it helps novice developers get used to seeing clean code.


How Ekans Ransomware Targets Industrial Control Systems

The Ekans ransomware begins the attack by attempting to confirm its target. It does this by resolving the domain of the targeted organization and comparing the resolved address against a preprogrammed list of IP addresses, the researchers note. If the domain doesn't match the IP list, the ransomware aborts the attack: "If the domain/IP is not available, the routine exits," the researchers add. If the ransomware does find a match between the targeted domain and the list of approved IP addresses, Ekans infects the domain controller on the network and runs commands to isolate the infected system by disabling the firewall, according to the report. The malware then identifies and kills running processes and deletes the shadow copies of files, which makes recovery more difficult, Hunter and Gutierrez note. In the final stage of the attack, the malware uses RSA-based encryption to lock the target organization's data and files, and it displays a ransom note demanding an undisclosed amount in exchange for decrypting them. If the victim fails to respond within the first 48 hours, the attackers threaten to publish the stolen data, according to the Ekans ransom note recovered by the FortiGuard researchers.
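The target-confirmation step described above boils down to a simple DNS-resolution check. Here is a minimal Python sketch of that gating logic; the domain handling and the IP values are placeholders for illustration, not the addresses embedded in real Ekans samples:

```python
import socket

# Placeholder allowlist, standing in for the IP addresses the
# ransomware ships with preprogrammed (real samples embed their own).
EXPECTED_IPS = {"203.0.113.10", "203.0.113.11"}

def target_confirmed(domain: str) -> bool:
    """Return True only if the domain resolves to a preprogrammed IP.

    Mirrors the gating step the researchers describe: if resolution
    fails or the address is not on the list, the routine exits
    without doing anything further.
    """
    try:
        resolved = socket.gethostbyname(domain)
    except OSError:
        return False  # "If the domain/IP is not available, the routine exits"
    return resolved in EXPECTED_IPS
```

Checks like this are commonly used in targeted malware to avoid detonating inside analysis sandboxes or on networks other than the intended victim's.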


The best SSDs of 2020: Supersized 8TB SSDs are here, and they're amazing

If performance is paramount and price is no object, Intel’s Optane SSD 905P is the best SSD you can buy, full stop—though the 8TB Sabrent Rocket Q NVMe SSD discussed above is a strong contender if you need big capacities and big-time performance. Intel’s Optane drive doesn’t use traditional NAND technology like other SSDs; instead, it’s built around the futuristic 3D XPoint technology developed by Micron and Intel. Hit that link if you want a tech deep-dive, but in practical terms, the Optane SSD 905P absolutely plows through our storage benchmarks and carries a ridiculous 8,750TBW (terabytes written) endurance rating, compared to the roughly 200TBW offered by many NAND SSDs. If that holds true, this blazing-fast drive is basically immortal—and it looks damned good, too. But you pay for the privilege of bleeding-edge performance. Intel’s Optane SSD 905P costs $600 for a 480GB version and $1,250 for a 1.5TB model, with several additional options available in both the U.2 and PCI-E add-in-card form factors. That’s significantly more expensive than even NVMe SSDs—and like those, the benefits of Intel’s SSD will be most obvious to people who move large amounts of data around regularly.
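To put those TBW ratings in perspective, a quick back-of-the-envelope calculation helps. The 100GB-per-day write workload below is my own assumption (a fairly heavy desktop user), not a figure from the article:

```python
def endurance_years(tbw_rating: float, gb_written_per_day: float) -> float:
    """Years until a drive's rated terabytes-written budget is exhausted."""
    days = (tbw_rating * 1000) / gb_written_per_day  # treating 1 TB = 1000 GB
    return days / 365

# Assumed workload: 100 GB of writes per day.
optane_years = endurance_years(8750, 100)  # 905P's 8,750 TBW rating
nand_years = endurance_years(200, 100)     # a typical ~200 TBW NAND drive
print(round(optane_years), round(nand_years, 1))  # roughly 240 vs 5.5 years
```

Even at that heavy write rate, the Optane drive's rated endurance outlasts any plausible ownership period by orders of magnitude, which is what the "basically immortal" claim is getting at.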


SRE: A Human Approach to Systems

Failure will happen, incidents will occur, and SLOs will be breached. These things may be difficult to face, but part of adopting SRE is acknowledging that they are the norm. Systems are made by humans, and humans are imperfect. What’s important is learning from these failures and celebrating the opportunity to grow. One way to foster this culture is to prioritize psychological safety in the workplace. The power of psychological safety is obvious yet often overlooked. Industry thought leaders like Gene Kim have long promoted the importance of feeling safe to fail. Kim addresses psychological insecurity in his novel, “The Unicorn Project,” whose main character, Maxine, has been shunted from a high-functioning team to Project Phoenix, where mistakes are punishable by firing. Kim writes: “She’s [Maxine] seen the corrosive effects that a culture of fear creates, where mistakes are routinely punished and scapegoats fired. Punishing failure and ‘shooting the messenger’ only cause people to hide their mistakes, and eventually, all desire to innovate is completely extinguished.”



Quote for the day:

"Education: the path from cocky ignorance to miserable uncertainty." -- Mark Twain