Daily Tech Digest - June 16, 2024

Human and AI Partnership Drives Manufacturing and Distribution Forward

Industry 5.0 offers a promising solution to the persistent challenge of labor shortages. By fostering a symbiotic dynamic between humans and robots, it lightens the resourcing burden. Human workers bring adaptability and problem-solving skills to the table, while robots contribute speed and precision in task handling. This collaboration not only boosts job satisfaction and productivity but also promotes employee skill development and reduces overall errors. Moreover, Industry 5.0 assigns robots to physically demanding or hazardous duties, enhancing safety, minimizing human error in critical situations, and creating a healthier work environment. It can also significantly enhance supply chain resilience, a critical concern on every manufacturer and distributor’s radar following the recent Red Sea crisis. Leveraging real-time data analytics and AI-driven insights assists human decision-making in predicting and mitigating disruptions. Advanced sensors and IoT devices continuously monitor supply chain activities, including early detection of potential issues such as transportation delays or inventory shortages.


Beyond Traditional: Why Cybersecurity Needs Neurodiversity

Neurodiverse individuals often exhibit exceptional logical and methodical thinking, attention to detail, and cognitive pattern recognition skills. For example, they can hyperfocus on tasks, giving complete attention to specific issues for prolonged periods, which is invaluable in identifying and mitigating security threats. Their ability to engage deeply in their work ensures that even the smallest anomalies are detected and addressed swiftly. Moreover, many neurodiverse individuals thrive on repetitive tasks and routines, finding comfort and even excitement in long, monotonous processes. This makes them well-suited for roles that involve continuous monitoring and analysis of security data. Their high levels of concentration and persistence allow them to stay on task until solutions are found, ensuring thorough and effective problem-solving. Creativity is another significant benefit that neurodiverse individuals bring to cybersecurity. Their unique, nonlinear thinking enables them to approach problems from different angles and develop innovative solutions. This creativity is crucial for devising new methods to counteract evolving cyber threats. 


Missing Links: How to ID Supply Chain Risks

Current events seem to indicate that supply chain resilience is something companies need to master, sooner rather than later. To get there, they need real-time, end-to-end visibility into supply chain issues and the ability to proactively plan for various types of supply chain risks. “We have discussed the next best action for decades in our supply chains and operations, but realistically, we have never had the flexibility in our process and systems to enable that,” says Protiviti’s Petrucci. “As the world is adopting cloud and more cloud-native design and thinking it will enable us to move close to breaking away from the traditional systems and design more capable supply chain risk, execution, and next best action capabilities. We have started to enable our customers in moving in this direction.” ... “The increasing risk of being tied to one region is now at the highest level ever, and I believe we’ll continue to see a shift in supplier sourcing strategies, with the pendulum swinging towards regional diversification,” says Fictiv’s Evans. “Regional optionality continues to be top of mind for supply chain leaders based on geopolitical uncertainties and the need to mitigate risk where possible.”


Human I/O: Detecting situational impairments with large language models

Situational impairments can vary greatly and change frequently, which makes it difficult to apply one-size-fits-all solutions that help users with their needs in real-time. For example, think about a typical morning routine: while brushing their teeth, someone might not be able to use voice commands with their smart devices. When washing their face, it could be hard to see and respond to important text messages. And while using a hairdryer, it might be difficult to hear any phone notifications. Even though various efforts have created solutions tailored for specific situations like these, creating manual solutions for every possible situation and combination of challenges isn't really feasible and doesn't work well on a large scale. ... Rather than devising individual models for activities like face-washing, tooth-brushing, or hair-drying, Human Input/Output (Human I/O) universally assesses the availability of a user’s vision (e.g., to read text messages, watch videos), hearing (e.g., to hear notifications, phone calls), vocal (e.g., to have a conversation, use Google Assistant), and hand (e.g., to use touch screen, gesture control) input/output interaction channels.
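
The Human I/O models themselves are multimodal and not reproduced here; below is a purely illustrative sketch of the channel-availability idea, with the activity labels and impairment mappings invented for this example.

```python
# Illustrative sketch: map a detected activity to the availability of the
# four interaction channels Human I/O reasons about. The activity labels
# and impairment sets here are invented for illustration only.
CHANNELS = ("vision", "hearing", "vocal", "hand")

# Per-activity impairment: which channels are degraded while it occurs.
IMPAIRMENTS = {
    "brushing_teeth": {"vocal"},      # mouth occupied: voice commands hard
    "washing_face": {"vision"},       # eyes closed/wet: can't read messages
    "using_hairdryer": {"hearing"},   # loud noise masks notifications
}

def channel_availability(activity):
    """Return a dict of channel -> available? for a detected activity."""
    impaired = IMPAIRMENTS.get(activity, set())
    return {ch: ch not in impaired for ch in CHANNELS}

avail = channel_availability("using_hairdryer")
print(avail)
```

A real system would infer the activity from sensors rather than receive it as a string, but the output contract is the same: a per-channel availability signal that downstream apps can adapt to.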


Do IDEs Make You Stupid?

An IDE can be an indispensable tool when used to help a developer think better. But when it’s used as a means of automation while removing the developer’s need to understand the underlying tasks of modern computer programming, an IDE can be a detriment. No doubt, an IDE provides a benefit by automating programming tasks that are tedious and repetitive, or even those tasks that require the programmer to do a lot of typing. Still, those commands are there for a reason, and a developer would do well to understand the details of what they’re about and why they need to be done. ... The “hiding the math” aspect of using an IDE might not matter to senior developers who have the experience and insight to understand the hidden details that an IDE has automated. However, for an entry-level developer, using an IDE without understanding what it’s doing behind the scenes can limit the developer’s ability to do the type of more advanced work that’s needed to progress in their career. Knowing the details is important. ... An IDE can improve cognitive ergonomics, but you must want it to. Passive interaction with the tool will get you only so far. 


How to streamline data center sustainability governance

Achieving sustainability goals requires an extensive understanding of energy systems – specifically how, where, and when power is used. Eaton’s Brightlayer Data Centers suite includes the industry’s first digital platform that natively integrates asset management, IT and operational technology (OT) device monitoring, IT automation, power quality metrics, and one-line diagrams into a single, configurable application. Leveraging decades of expertise in the data center industry (from low- and medium-voltage switchgear and transformers to uninterruptible power supplies, battery storage, and power distribution units) this platform consolidates information traditionally siloed in disparate applications. ... More effective data and reporting on sustainability will help future-proof compliance, uncover opportunities to reduce resource consumption, increase customer satisfaction, and differentiate businesses. This approach improves data center performance by applying digitalization to make assets work harder, smarter, and more sustainably.


Why we don't have 128-bit CPUs

You might think 128-bit isn’t viable because it’s difficult or impossible, but that’s not the case. Many components in modern processors, like memory buses and SIMD units, already utilize 128-bit or larger sizes for specific tasks. For instance, the AVX-512 instruction set allows for 512-bit wide data processing. These SIMD (Single Instruction, Multiple Data) instructions have evolved from 32-bit to 64-bit, 128-bit, 256-bit, and now 512-bit operands, demonstrating significant advancements in parallel processing capabilities. ... The only significant use cases for 128-bit integers are IPv6 addresses, universally unique identifiers (or UUID) that are used to create unique IDs for users (Minecraft is a high-profile use case for UUID), and file systems like ZFS. The thing is, 128-bit CPUs aren't necessary to handle these tasks, which have been able to exist just fine on 64-bit hardware. Ultimately, the key reason why we don't have 128-bit CPUs is that there's no demand for a 128-bit hardware-software ecosystem. The industry could certainly make it if it wanted to, but it simply doesn't.
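
Python's standard library makes the point concrete: these 128-bit values already live comfortably on 64-bit machines, because wide integer math can be composed from machine-word operations.

```python
import ipaddress
import uuid

# IPv6 addresses and UUIDs are 128-bit values, yet Python running on
# ordinary 64-bit hardware handles them natively via arbitrary-precision
# integers, which are built from machine-word "limbs" under the hood.
addr = ipaddress.IPv6Address("2001:db8::1")
uid = uuid.UUID("12345678-1234-5678-1234-567812345678")

assert int(addr).bit_length() <= 128
assert uid.int.bit_length() <= 128

big = (1 << 127) + 12345   # a 128-bit integer
print(big % (1 << 64))     # -> 12345: no 128-bit CPU required
```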


A New Tactic in the Rapid Evolution of QR Code Scams

Because the QR code has ASCII characters behind it, security systems may ignore it, thinking it’s a clean email. “Attack forms all evolve,” Fuchs wrote. “QR code phishing is no different. It’s unique, though, that the evolution has happened so rapidly. It started off with standard MFA verification codes. These were pretty straight forward, asking users to scan a code, either to re-set MFA or even look at financial data like an annual 401k contribution.” The next iteration – what Fuchs called QR Code Phishing 2.0 – involved conditional routing attacks, where the link adjusts to where the victim is interacting with it. If the target is using an Apple Mac system, one link appears. Another one will appear if the user is on a smartphone running Android. “We also saw custom QR Code campaigns, where hackers are dynamically populating the logo of the company and the correct username,” he wrote. This newest phase (“QR Code 3.0”) is more of a manipulation campaign, using a text-based representation of a QR code rather than a traditional one. “It also represents how threat actors are responding to the landscape,” he wrote.
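
The mechanism is easy to picture: a bit matrix rendered as block characters looks like an image to a human but is plain text to a scanner. A toy sketch (the pattern below is not a valid QR code, just an illustrative grid):

```python
# Toy sketch: render a bit matrix as "text" using full-block characters,
# the way text-based QR codes embed what looks like an image in plain
# characters. The matrix below is NOT a valid QR code, just a pattern.
def render_text_qr(matrix):
    return "\n".join(
        "".join("\u2588\u2588" if cell else "  " for cell in row)
        for row in matrix
    )

pattern = [
    [1, 1, 1, 0, 1],
    [1, 0, 1, 0, 0],
    [1, 1, 1, 0, 1],
]
art = render_text_qr(pattern)
print(art)
```

To a content filter inspecting the email body, `art` is just a run of Unicode characters; to a phone camera pointed at the rendered message, it is scannable.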


'Sleepy Pickle' Exploit Subtly Poisons ML Models

Poisoning a model in this way carries a number of stealth advantages. For one thing, it doesn't require local or remote access to a target's system, and no trace of malware is left on disk. Because the poisoning occurs dynamically during deserialization, it resists static analysis. Serialized model files are hefty, so the malicious code necessary to cause damage might represent only a small fraction of the total file size. And these attacks can be customized in the same ways regular malware attacks are to evade detection and analysis. While Sleepy Pickle can presumably be used to do any number of things to a target's machine, the researchers noted, "controls like sandboxing, isolation, privilege limitation, firewalls, and egress traffic control can prevent the payload from severely damaging the user’s system or stealing/tampering with the user’s data." More interestingly, attacks can be oriented to manipulate the model itself. For example, an attacker could insert a backdoor into the model, or manipulate its weights and, thereby, its outputs.
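
The Sleepy Pickle specifics are in the original research, but the underlying mechanism, that pickle deserialization can invoke arbitrary callables, is easy to demonstrate with a benign sketch:

```python
import pickle

# Benign demonstration of why pickle is a dangerous model format: an
# object can specify, via __reduce__, a callable for the loader to run
# at deserialization time. A real attack hides a payload inside an
# otherwise-functional serialized model file.
class Benign:
    def __reduce__(self):
        # (callable, args) tuple: executed during pickle.loads
        return (print, ("code ran during unpickling",))

blob = pickle.dumps(Benign())
pickle.loads(blob)  # prints the message; no method call, just loading
```

This is exactly why the dynamic, load-time nature of the attack defeats static scans of the file on disk: the damage happens when the bytes are interpreted, not when they are stored.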


Digital Twins In Meetings? Not Any Time Soon

The benefits of having a digital twin are very interesting. To start, consider productivity, Bloomfilter founder and CEO Erik Severinghaus told Reworked. Your twin could manage everyday tasks and find problems before they become major headaches. However, there are many problems to solve first. The first thing to understand is how exactly these digital twins would copy us. He also raised the question of security, ensuring these AI versions of us cannot be used to create problems in our lives. Finally, while it is often overlooked, organizations need to keep ethical considerations in mind, Severinghaus continued. Are all employees OK with how their data and images get used by these digital twins? And what about future malicious use cases that no one has even imagined yet? ... While Yuan predicted the use of digital twins at an undetermined future date on the podcast, it clearly is still speculative. Let's just say you're safe from attending a meeting with a digital twin for now. However, given where we were with AI just 18 months ago, we suspect Yuan's vision becoming a reality might not be as far off in the future as you'd think.



Quote for the day:

"Be willing to make decisions. That's the most important quality in a good leader." -- General George S. Patton, Jr.

Daily Tech Digest - June 15, 2024

Does AI make us dependent on Big Tech?

The assumption is that banks would find it impractical to independently develop the extensive computing power required for AI technologies. Heavy reliance on a small number of tech providers would pose a significant risk, particularly for European banks. It is further assumed that these banks need to retain the flexibility to switch between different technology vendors to prevent excessive dependence on any one provider, a situation also known as vendor lock-in. And now they want to get the governments involved. The U.K. has proposed new regulations to moderate financial firms’ reliance on external technology companies such as Microsoft, Google, IBM, Amazon, and others. Regulators are specifically concerned that issues at any single cloud computing company could disrupt services across numerous financial institutions. The proposed rules are part of larger efforts to protect the financial sector from systemic risks posed by such concentrated dependence on a few tech giants. In its first statement on AI, the European Union’s securities watchdog emphasized that banks and investment firms must not shirk boardroom responsibility when deploying AI technologies.


How To Choose An Executive Coach? Remember The 5 C’s

A lot of people might put Congruence first, but if you don’t have Clarity the interpersonal dynamics are a moot point—it’s not just about liking your coach. Once you are clear on your goals and outcomes then you should seek a coach with whom you are willing to be psychologically vulnerable. You should test the potential coach to see if their style resonates with yours. For example, are they direct enough for you? Are they structured and organized, if you need that?  ... You should be looking for Credibility—that is, relevant knowledge and expertise. You’ll learn the most by asking questions to explore the coach’s experience and track record. Has the coach worked with other executives at your level? Do they have a frame of reference for your situation and what you are grappling with? Have they worked in a similar environment and successfully coached others with similar challenges? Do they understand the corporate world and the politics of your type of organization? One thing to keep in mind is that many executives today are not just looking for a coach to help them with finding their own solutions, but also for “coach-sulting”—which may include advice and counsel on leadership, strategy, organizational development, team building and tactical problem-solving.


New Research Suggests Architectural Technical Debt Is Most Damaging to Applications

“Architectural challenges and a lack of visibility into architecture throughout the software development lifecycle prevent businesses from reaching their full potential,” said Moti Rafalin, CEO and co-founder of vFunction, a company promoting AI-driven architectural observability and sponsor of the study. “Adding to this, the rapid accumulation of technical debt hampers engineering velocity, limits application scalability, impacts resiliency, and amplifies the risk of outages, delayed projects, and missed opportunities.” Monolithic architectures bear the brunt of the impact, with 57% of organizations allocating over a quarter of their IT budget to technical debt remediation, compared to 49% for microservices architectures. Companies with monolithic architectures are also 2.1 times more likely to face issues with engineering velocity, scalability, and resiliency. However, microservices architectures are not immune to technical debt challenges, with 53% of organizations experiencing delayed major technology migrations or platform upgrades due to productivity concerns.


Surge in Attacks Against Edge and Infrastructure Devices

Not just criminals but also state-sponsored attackers have been exploiting such devices, Google Cloud's Mandiant threat intelligence unit recently warned. One challenge for defenders: Many network edge devices function as "black boxes which are not easily examined or monitored by network administrators," and also lack antimalware or other endpoint detection and response capabilities, WithSecure's report says. "It is difficult for network administrators to verify they are secure, and they often must take it on trust. Certain types of these devices also provide edge services and so are internet-accessible." Many of these devices don't by default produce detailed logs that defenders can monitor using security incident and event management tools to watch for signs of attack. "These devices are supposed to secure our networks, but by itself, there's no way I can install an AV client on it, or an EDR client, or say, 'Hey, give me some fancy logs about what is happening on the device itself,'" said Christian Beek, senior director of threat analytics at Rapid7, in an interview at Infosecurity Europe 2024. 


Edge Devices: The New Frontier for Mass Exploitation Attacks

The attraction to edge devices comes from easier entry; and they provide easier and greater stealth once compromised. Since they often provide a continuous service, they are rarely switched off. Vendors design them for continuity, so purposely make them difficult or impossible for administrator control beyond predefined options. Indeed, any such individual activity can void warranties. They frequently do not produce logs of their activity that can be analyzed by SIEMs, and they cannot be monitored by standard security controls. In this sense they are similar to the OT demand for continuity — why fix something that ain’t broke? Until it is broke, by which time it is probably too late. The result is that edge devices and services often comprise software components that can be decades old involving operating systems that are well beyond end of life; and they are effectively cybersecurity’s forgotten man. Once inside, an attacker is hidden and can plan and execute the attack over time and out of sight. “Edge services are often internet accessible, unmonitored, and provide a rapid route to privileged local or network credentials on a server with broad access to the internal network,” says the report.


Quantum Computing and AI: A Perfect Match?

Quantum AI is already here, but it's a silent revolution, Orús says. "The first applications of quantum AI are finding commercial value, such as those related to LLMs, as well as in image recognition and prediction systems," he states. More quantum AI applications will become available as quantum computers grow more powerful. "It's expected that in two-to-three years there will be a broad range of industrial applications of quantum AI." Yet the road ahead may be rocky, Li warns. "It's well known that quantum hardware suffers from noise that can destroy computation," he says. "Quantum error correction promises a potential solution, but that technology isn't yet available." ... GenAI and quantum computing are mind-blowing advances in computing technology, says Guy Harrison, enterprise architect at cybersecurity technology company OneSpan, in a recent email interview. "AI is a sophisticated software layer that emulates the very capabilities of human intelligence, while quantum computing is assembling the very building blocks of the universe to create a computing substrate," he explains.


How to Offboard Departing IT Staff Members

Some terminations are not amicable, however, and those cases require immediate action. The IT department must implement an emergency revocation procedure that involves the instantaneous deactivation of all of the employee’s access credentials across all systems. Immediate action minimizes the risk of retaliatory actions or data breaches, which are heightened concerns in such scenarios. ... Departing employees often leave behind a trail of licenses and subscriptions for various software and online services used during their tenure. IT departments must undertake a thorough assessment of these digital assets to determine which licenses remain necessary, which can be reallocated and which should be terminated, based on current and anticipated needs. ... Hardware retrieval is an aspect of offboarding that requires at least as much diligence as digital access revocation — and often more, given the number of remote employees that many businesses have. All devices issued to employees — laptops, tablets, smartphones, ID cards and more — must be returned, thoroughly inspected and wiped of sensitive information before they are reassigned or decommissioned.
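
As a hedged sketch of such an emergency revocation procedure (the systems here are stand-in callables, not real vendor APIs; a production runbook would wrap directory, SSO, VPN, and email providers):

```python
# Sketch of an emergency revocation runbook: iterate over every system
# the departing employee has credentials in and disable access
# immediately, continuing past individual failures so that one system
# outage cannot stall revocation everywhere else.
def revoke_all(user, systems):
    """systems: mapping of system name -> callable that disables a user."""
    failures = []
    for name, disable in systems.items():
        try:
            disable(user)
        except Exception as exc:
            failures.append((name, exc))  # escalate these for manual action
    return failures

disabled = []
systems = {
    "directory": lambda u: disabled.append(("directory", u)),
    "vpn": lambda u: disabled.append(("vpn", u)),
    "email": lambda u: disabled.append(("email", u)),
}
failures = revoke_all("jdoe", systems)
print(len(disabled), len(failures))  # -> 3 0
```

Collecting failures rather than aborting matters here: in a hostile-termination scenario, partial revocation now beats complete revocation later.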


Integrating Transfer Learning and Data Augmentation for Enhanced Machine Learning Performance

Concretely, the first step consists of applying data augmentation techniques, including flipping, noise injection, rotation, cropping, and color space augmentation, to augment the volume of target domain data. Secondly, a transfer learning model, utilizing ResNet50 as the backbone, extracts transferable features from raw image data. The model’s loss function integrates cross-entropy loss for classification and a distance metric function between source and target domains. By minimizing this combined loss function, the model aims to simultaneously improve classification accuracy on the target domain while aligning the distributions of the source and target domains. The experiments compared an enhanced transfer learning method with conventional ones across datasets like Office-31 and pneumonia X-rays. Different models, including DAN and DANN, were tested using various techniques like discrepancy-based and adversarial approaches. The enhanced method, incorporating data augmentation, consistently outperformed others, especially when source and target domains were more similar.
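
A minimal sketch of that combined objective, using the simplest possible domain distance (the squared difference of feature means, a linear MMD) in place of the richer metrics and ResNet50 features the paper uses:

```python
import numpy as np

# Minimal sketch of the combined loss described above: classification
# cross-entropy plus a distance between source- and target-domain
# features. The distance here is a linear MMD (squared difference of
# feature means); real implementations use deeper backbones and kernels.
def cross_entropy(probs, labels):
    return -np.mean(np.log(probs[np.arange(len(labels)), labels] + 1e-12))

def linear_mmd(source_feats, target_feats):
    diff = source_feats.mean(axis=0) - target_feats.mean(axis=0)
    return float(diff @ diff)

def combined_loss(probs, labels, src, tgt, lam=1.0):
    return cross_entropy(probs, labels) + lam * linear_mmd(src, tgt)

rng = np.random.default_rng(0)
probs = np.array([[0.9, 0.1], [0.2, 0.8]])   # softmax outputs
labels = np.array([0, 1])                    # ground-truth classes
src = rng.normal(size=(8, 4))                # source-domain features
tgt = rng.normal(size=(8, 4))                # target-domain features
loss = combined_loss(probs, labels, src, tgt)
```

Minimizing the second term pulls the two feature distributions together, which is why the method helps most when the domains are already reasonably similar.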


OIN expands Linux patent protection yet again (but not to AI)

Keith Bergelt, OIN's CEO, emphasized the importance of this update, stating, "Linux and other open-source software projects continue to accelerate the pace of innovation across a growing number of industries. By design, periodic expansion of OIN's Linux System definition enables OIN to keep pace with OSS's growth." Bergelt explained that this update reflects OIN's well-established process of carefully maintaining a balance between stability and incorporating innovative core open-source technologies into the Linux System definition. The latest additions result from OIN's consensus-driven update process. "OIN is also trying to make patent protection more accessible," he added. "We're trying to make it easier for people to understand what's in there and why it's in there, what it relates to, what projects it relates to, and what it means to developers and laymen as well as lawyers." Looking ahead, Bergelt said, "We made this conscious decision not to include AI. It's so dynamic. We wait until we see what AI programs have significant usage and adoption levels." This is how the OIN has always worked. The consortium takes its time to ensure it extends its protection to projects that will be around for the long haul.


Beyond Sessions: Centering Users in Mobile App Observability

The main use case for tracking users explicitly in backend data is the potential to link them to your mobile data. This linkage provides additional attributes that can then be associated with the request that led to slow backend traces. For example, you can add context that may be too expensive to be tracked directly in the backend, like the specific payload blobs for the request, but that is easily collectible on the client. For mobile observability, tracking users explicitly is of paramount importance. In this space, platforms, and vendors recognize that modeling a user’s experience is essential because knowing the totality and sequencing of the activities around the time a user experiences performance problems is key for debugging. By grouping temporally related events for a user and presenting them in a chronologically sorted order, they have created what has become de rigueur in mobile observability: the user session. Presenting telemetry this way allows mobile developers to spot patterns and provide explanations as to why performance problems occur. 
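
The sessionization idea described above can be sketched in a few lines (the 30-minute timeout is an arbitrary illustrative choice):

```python
from datetime import datetime, timedelta

# Sketch of the core sessionization idea: group a single user's
# temporally related events in chronological order, starting a new
# session whenever the gap between consecutive events exceeds a timeout.
def sessionize(events, timeout=timedelta(minutes=30)):
    """events: list of (timestamp, event_name) tuples for one user."""
    sessions, current = [], []
    for ts, name in sorted(events):
        if current and ts - current[-1][0] > timeout:
            sessions.append(current)
            current = []
        current.append((ts, name))
    if current:
        sessions.append(current)
    return sessions

t0 = datetime(2024, 6, 15, 9, 0)
events = [
    (t0, "app_open"),
    (t0 + timedelta(minutes=5), "tap_checkout"),
    (t0 + timedelta(hours=2), "app_open"),  # long gap -> new session
]
print(len(sessionize(events)))  # -> 2
```

Each resulting session is the chronologically sorted slice of telemetry a developer would scan to explain a performance problem.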



Quote for the day:

“Every adversity, every failure, every heartache carries with it the seed of an equal or greater benefit.” -- Napoleon Hill

Daily Tech Digest - June 14, 2024

State Machine Thinking: A Blueprint For Reliable System Design

State machines are instrumental in defining recovery and failover mechanisms. By clearly delineating states and transitions, engineers can identify and code for scenarios where the system needs to recover from an error, failover to a backup system or restart safely. Each state can have defined recovery actions, and transitions can include logic for error handling and fallback procedures, ensuring that the system can return to a safe state after encountering an issue. My favorite phrase to advocate here is: “Even when there is no documentation, there is no scope for delusion.” ... Having neurodivergent team members can significantly enhance the process of state machine conceptualization. Neurodivergent individuals often bring unique perspectives and problem-solving approaches that are invaluable in identifying states and anticipating all possible state transitions. Their ability to think outside the box and foresee various "what-if" scenarios can make the brainstorming process more thorough and effective, leading to a more robust state machine design. This diversity in thought ensures that potential edge cases are considered early in the design phase, making the system more resilient to unexpected conditions.
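
A minimal sketch of such a state machine, with an explicit recovery path back to a safe state (the states and events are invented for illustration):

```python
# Minimal state machine with an explicit recovery transition. Because
# every (state, event) pair is enumerated, there is no ambiguity about
# how the system returns to a safe state after an error.
TRANSITIONS = {
    ("idle", "start"): "running",
    ("running", "stop"): "idle",
    ("running", "error"): "recovering",      # failure has a defined path
    ("recovering", "recovered"): "idle",     # ...back to a safe state
}

class Machine:
    def __init__(self):
        self.state = "idle"

    def handle(self, event):
        key = (self.state, event)
        if key not in TRANSITIONS:
            raise ValueError(f"no transition for {event!r} in {self.state!r}")
        self.state = TRANSITIONS[key]
        return self.state

m = Machine()
m.handle("start")
m.handle("error")      # failure drives us into a defined recovery state
m.handle("recovered")  # recovery action returns the system to safety
print(m.state)  # -> idle
```

The `ValueError` on an unlisted pair is the point: an undefined transition is a design gap surfaced immediately, not a silent drift into an unknown state.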


How to Build a Data Stack That Actually Puts You in Charge of Your Data

Sketch a data stack architecture that delivers the capabilities you've deemed necessary for your business. Your goal here should be to determine what your ideal data stack looks like, including not just which types of tools it will include, but also which personnel and processes will leverage those tools. As you approach this, think in a tool-agnostic way. In other words, rather than looking at vendor solutions and building a stack based on what's available, think in terms of your needs. This is important because you shouldn't let tools define what your stack looks like. Instead, you should define your ideal stack first, and then select tools that allow you to build it. ... Another critical consideration when evaluating tools is how much expertise and effort are necessary to get tools to do what you need them to do. This is important because too often, vendors make promises about their tools' capabilities — but just because a tool can theoretically do something doesn't mean it's easy to do that thing with that tool. For example, a data discovery tool might require you to install special plugins or write custom code before it can work with a legacy storage system you depend on.


IT leaders go small for purpose-built AI

A small AI approach has worked for Dayforce, a human capital management software vendor, says David Lloyd, chief data and AI officer at the company. Dayforce uses AI and related technologies for several functions, with machine learning helping to match employees at client companies to career coaches. Dayforce also uses traditional machine learning to identify employees at client companies who may be thinking about leaving their jobs, so that the clients can intervene to keep them. Not only are smaller models easier to train, but they also give Dayforce a high level of control over the data they use, a critical need when dealing with employee information, Lloyd says. When looking at the risk of an employee quitting, for example, the machine learning tools developed by Dayforce look at factors such as the employee’s performance over time and the number of performance increases received. “When modeling that across your entire employee base, looking at the movement of employees, that doesn’t require generative AI, in fact, generative would fail miserably,” he says. “At that point you’re really looking at things like a recurrent neural network, where you’re looking at the history over time.”
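
Dayforce's actual models are not public; as an illustrative sketch of "looking at history over time", here is a bare Elman RNN forward pass over a made-up employee history (the weights are untrained and random, so the score demonstrates shape and flow only, not a real prediction):

```python
import numpy as np

# Illustrative only: an Elman-style recurrent cell folds an employee's
# per-period history (e.g., performance score, raise received) into a
# hidden state, which a final layer maps to an attrition-risk score.
# Weights are random and untrained; this shows the mechanism, not a model.
rng = np.random.default_rng(42)
W_x = rng.normal(scale=0.5, size=(4, 2))   # input -> hidden
W_h = rng.normal(scale=0.5, size=(4, 4))   # hidden -> hidden (recurrence)
w_out = rng.normal(scale=0.5, size=4)      # hidden -> risk score

def attrition_score(history):
    """history: sequence of [performance, raise_received] per period."""
    h = np.zeros(4)
    for x in history:
        h = np.tanh(W_x @ np.asarray(x) + W_h @ h)  # carry state forward
    return float(1 / (1 + np.exp(-(w_out @ h))))    # sigmoid -> (0, 1)

# Three quarters: strong start, then declining performance, no raises.
score = attrition_score([[0.9, 1.0], [0.7, 0.0], [0.4, 0.0]])
```

The recurrence is what makes this a fit for the use case: the same small cell processes a history of any length, with no generative model anywhere in the loop.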


Why businesses need ‘agility and foresight’ to stay ahead in tech

In the current IT landscape, one of the most pressing challenges is the evolving threat of cyberattacks, particularly those augmented by GenAI. As GenAI becomes more sophisticated, it introduces new complexities for cybersecurity with cybercriminals leveraging it to create advanced attack vectors. ... Several transformative technologies are reshaping our industry and the world at large. At the forefront of these innovations is GenAI. Over the past two years, GenAI has moved from theory to practice. While GenAI has fostered many creative ideas in 2023 of how it will transform business, GenAI projects are starting to become business-ready with visible productivity gains becoming evident. Transformative technology also holds a strong promise to have a profound impact on cybersecurity, offering advanced capabilities for threat detection and incident response from a cybersecurity standpoint. Organisations will need to use their own data for training and fine-tuning models, conducting inference where data originates. Although there has been much discussion about zero trust within our industry, we’re now seeing it evolve from a concept to a real technology. 


Who Should Run Tests? On the Future of QA

QA is a funny thing. It has meant everything from “the most senior engineer who puts the final stamp on all code” to “the guy who just sort of clicks around randomly and sees if anything breaks.” I’ve seen QA operating at all different levels of the organization, from engineers tightly integrated with each team to an independent, almost outside organization. A basic question as we look at shifting testing left, as we put more testing responsibility with the product teams, is what the role of QA should be in this new arrangement. This can be generalized as “who should own tests?” ... If we’re shifting testing left now, that doesn’t mean that developers will be running tests for the first time. Rather, shifting left means giving developers access to a complete set of highly accurate tests, and instead of just guessing from their understanding of API contracts and a few unit tests that their code is working, we want developers to be truly confident that they are handing off working code before deploying it to production. It’s a simple, self-evident principle that when QA finds a problem, that should be a surprise to the developers.


Implementing passwordless in device-restricted environments

Implementing identity-based passwordless authentication in workstation-independent environments poses several unique challenges. First and foremost is the issue of interoperability and ensuring that authentication operates seamlessly across a diverse array of systems and workstations. This includes avoiding repetitive registration steps which lead to user friction and inconvenience. Another critical challenge, without the benefit of mobile devices for biometric authentication, is implementing phishing and credential theft-resistant authentication to protect against advanced threats. Cost and scalability also represent significant hurdles. Providing individual hardware tokens to each user is expensive in large-scale deployments and introduces productivity risks associated with forgotten, lost, damaged or shared security keys. Lastly, the need for user convenience and accessibility cannot be understated. Passwordless authentication must not only be secure and robust but also user-friendly and accessible to all employees, irrespective of their technical expertise. 


Modern fraud detection need not rely on PII

A fraud detection solution should also retain certain broad data about the original value, such as whether an email domain is free or corporate, whether a username contains numbers, whether a phone number is premium, etc. However, pseudo-anonymized data can still be re-identified, meaning if you know two people’s names you can tell if and how they have interacted. This means it is still too sensitive for machine learning (ML) since models can almost always be analyzed to regurgitate the values that go in. The way to deal with that is to change the relationships into features referencing patterns of behavior, e.g., the number of unique payees from an account in 24 hours, the number of usernames associated with a phone number or device, etc. These features can then be treated as fully anonymized, exported and used in model training. In fact, generally, these behavioral features are more predictive than the original values that went into them, leading to better protection as well as better privacy. Finally, a fraud detection system can make good use of third-party data that is already anonymized. 
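
One of the behavioral features named above, the number of unique payees from an account in 24 hours, can be sketched directly; only the aggregate leaves the function, which is what makes it safe to export for model training:

```python
from datetime import datetime, timedelta

# Sketch of turning raw (re-identifiable) transaction records into a
# behavioral feature: unique payees from an account in a 24-hour window.
# The identities stay inside this function; only the count is exported.
def unique_payees_24h(transactions, account, now):
    cutoff = now - timedelta(hours=24)
    payees = {t["payee"] for t in transactions
              if t["account"] == account and t["time"] >= cutoff}
    return len(payees)

now = datetime(2024, 6, 15, 12, 0)
txns = [
    {"account": "A", "payee": "p1", "time": now - timedelta(hours=1)},
    {"account": "A", "payee": "p2", "time": now - timedelta(hours=5)},
    {"account": "A", "payee": "p1", "time": now - timedelta(hours=30)},  # too old
    {"account": "B", "payee": "p9", "time": now - timedelta(hours=2)},
]
feature = unique_payees_24h(txns, "A", now)
print(feature)  # -> 2
```

A sudden spike in this count is also a strong fraud signal in its own right, illustrating the article's point that behavioral features often predict better than the raw values they summarize.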


Deepfakes: Coming soon to a company near you

Deepfake scams are already happening, but the size of the problem is difficult to estimate, says Jake Williams, a faculty member at IANS Research, a cybersecurity research and advisory firm. In some cases, the scams go unreported to save the victim’s reputation, and in other cases, victims of other types of scams may blame deepfakes as a convenient cover for their actions, he says. At the same time, any technological defenses against deepfakes will be cumbersome — imagine a deepfake detection tool listening in on every phone call made by employees — and they may have a limited shelf life as AI technologies rapidly advance. “It’s hard to measure because we don’t have effective detection tools, nor will we,” says Williams, a former hacker at the US National Security Agency. “It’s going to be difficult for us to keep track of over time.” While some hackers may not yet have access to high-quality deepfake technology, faking voices or images on low-bandwidth video calls has become trivial, Williams adds. Unless your Zoom meeting is HD quality or better, a face swap may be good enough to fool most people.


A Deep Dive Into the Economics and Tactics of Modern Ransomware Threat Actors

A common trend among threat actors is to rely on older techniques but allocate more resources and deploy them differently to achieve greater success. Several security solutions organizations have long relied on, such as multi-factor authentication (MFA), are now vulnerable to circumvention with minimal effort. Specifically, organizations need to be aware of which MFA factors they support, such as push notifications, PIN codes, FIDO keys, and legacy solutions like SMS text messages. The latter is particularly concerning: SMS messaging has long been considered an insecure form of authentication, since it is managed by third-party cellular providers and thus lies outside the control of both employees and their organizations. In addition to these technical forms of breaches, the tried-and-true method of phishing is still viable. Both white-hat and black-hat tools continue to be enhanced to exploit common MFA replay techniques. Like Cobalt Strike and other professional tools built for security testers but co-opted by threat actors to maintain persistence on compromised systems, MFA bypass/replay tools have also become more professional. 
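An MFA inventory of the kind described above can be automated as a simple policy check. The factor names and categories below are illustrative, not drawn from any particular standard or product:

```python
# Hypothetical strength categories for MFA factors; the names are
# illustrative examples, not an authoritative taxonomy.
PHISHING_RESISTANT = {"fido2_key", "platform_passkey"}
WEAK = {"sms", "voice_call"}

def audit_factors(enabled_factors):
    """Return the enabled factors a policy review should flag or favor."""
    return {
        "flagged_weak": sorted(f for f in enabled_factors if f in WEAK),
        "phishing_resistant": sorted(f for f in enabled_factors if f in PHISHING_RESISTANT),
    }
```

A real audit would pull the enabled-factor list from the identity provider rather than a hard-coded set, but the classification step looks much the same.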


Troubleshooting Windows with Reliability Monitor

Reliability Monitor zeroes in on and tracks a limited set of errors and changes on Windows 10 and 11 desktops (and earlier versions going back to Windows Vista), offering immediate diagnostic information to administrators and power users trying to puzzle their way through crashes, failures, hiccups, and more. ... There are many ways to get to Reliability Monitor in Windows 10 and 11. At the Windows search box, if you type reli you’ll usually see an entry that reads View reliability history pop up on the Start menu in response. Click that to open the Reliability Monitor application window. ... Knowing the source of failures can help you take action to prevent them. For example, certain critical events show APPCRASH as the Problem Event Name. This signals that some Windows app or application has experienced a failure sufficient to make it shut itself down. Such events are typically internal to an app, often requiring a fix from its developer. Thus, if I see a Microsoft Store app that I seldom or never use throwing crashes, I’ll uninstall that app so it won’t crash any more. This keeps the Reliability Index up at no functional cost.



Quote for the day:

"Success is a state of mind. If you want success, start thinking of yourself as a success." -- Joyce Brothers

Daily Tech Digest - June 13, 2024

Backup lessons learned from 10 major cloud outages

So, what’s the most critical lesson here? Back up your cloud data! And I don’t just mean relying on your provider’s built-in backup services. As we saw with Carbonite, StorageCraft and OVH, those backups can evaporate along with your primary data if disaster strikes. You need to follow the 3-2-1 rule religiously: keep at least three copies of your data, on two different media, with one copy off-site. And in the context of the cloud, “different media” means not storing everything in the same type of system; use different failure domains. Also, “off-site” means in a completely separate cloud account or, even better, with a third-party backup provider. But it’s not just about having backups; it’s about having the right kind of backups. Take the StorageCraft incident, for example. They lost customer backup metadata during a botched cloud migration, rendering those backups useless. This hammers home the importance of not only backing up your primary data but also maintaining the integrity and recoverability of your backup data itself.
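The 3-2-1 rule lends itself to a mechanical check. Here is a minimal sketch in Python; the copy descriptors and field names (`medium`, `offsite`) are hypothetical, standing in for whatever metadata a real backup inventory exposes:

```python
def satisfies_3_2_1(copies):
    """Check a list of backup copies against the 3-2-1 rule:
    at least 3 copies, on at least 2 different media / failure domains,
    with at least 1 copy off-site (separate account or third party).

    Each copy is a dict like {"medium": "s3", "offsite": True}.
    """
    enough_copies = len(copies) >= 3
    two_media = len({c["medium"] for c in copies}) >= 2
    one_offsite = any(c["offsite"] for c in copies)
    return enough_copies and two_media and one_offsite
```

Note that this checks only the policy, not the recoverability point made next: three intact copies are useless if, as in the StorageCraft incident, the metadata needed to restore them is lost.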


4 Ways to Control Cloud Costs in the Age of Generative AI

First and foremost, prioritize building a cost-conscious culture within your organization. IT professionals face serious challenges in getting spending under control and identifying value where they can. Educating teams on cloud cost management strategies and fostering accountability can empower them to make informed decisions that align with business objectives. Organizations are also increasingly implementing FinOps frameworks and strategies in their cloud cost optimization efforts. This promotes shared responsibility for cloud costs across IT teams, DevOps, and other cross-functional teams. ... Implementing robust monitoring and optimization tools is essential. By leveraging analytics and automation, your organization can gain real-time insights into cloud usage patterns and identify opportunities for optimization. Whether it's rightsizing resources, implementing cost allocation tags, or leveraging spot instances, proactive optimization measures can yield substantial cost savings without sacrificing performance.


Gen AI can be the answer to your data problems — but not all of them

One use case is particularly well suited for gen AI because it was specifically designed to generate new text. “They’re very powerful for generating synthetic data and test data,” says Noah Johnson, co-founder and CTO at Dasera, a data security firm. “They’re very effective on that. You give them the structure and the general context, and they can generate very realistic-looking synthetic data.” The synthetic data is then used to test the company’s software, he says. ... The most important thing to know is that gen AI won’t solve all of a company’s data problems. “It’s not a silver bullet,” says Daniel Avancini, chief data officer at Indicium, an AI and data consultancy. If a company is just starting on its data journey, getting the basics right is key, including building good data platforms, setting up data governance processes, and using efficient and robust traditional approaches to identifying, classifying, and cleaning data. “Gen AI is definitely something that’s going to help, but there are a lot of traditional best practices that need to be implemented first,” he says. 


Scores of Biometrics Bugs Emerge, Highlighting Authentication Risks

Biometrics generally are regarded as a step above typical authentication mechanisms — that extra James Bond-level of security necessary for the most sensitive devices and the most serious environments. ... The critical nature of the environments in which these systems are so often deployed necessitates that organizations go above and beyond to ensure their integrity. And that job takes much more than just patching newly discovered vulnerabilities. "First, isolate a biometric reader on a separate network segment to limit potential attack vectors," Kiguradze recommends. Then, "implement robust administrator passwords and replace any default credentials. In general, it is advisable to conduct thorough audits of the device’s security settings and change any default configurations, as they are usually easier to exploit in a cyberattack." "There have been recent security breaches — you've probably read about them," acknowledges Rohan Ramesh, director of product marketing at Entrust. But in general, he says, there are ways to protect databases with hardware security modules and other advanced encryption technologies.


Mastering the tabletop: 3 cyberattack scenarios to prime your response

The ransomware CTEP explores aspects of an organization’s operational resiliency and poses key questions aimed at understanding threats to an organization, what information the attacker leverages, and how to conduct risk assessments to identify specific threats and vulnerabilities to critical assets. Given that ransomware attacks focus on data and systems, the scenario asks key questions about the accuracy of inventories and whether there are resources in place dedicated to mitigating known exploited vulnerabilities on internet-facing systems. This includes not just having backups, but knowing their retention period and understanding how long it would take to restore from them in an event such as a ransomware attack. Questions asked during the tabletop also include a focus on assessing zero-trust architecture implementation, or the lack thereof. This is critical, given that zero trust emphasizes least-permissive access control and network segmentation, practices that can limit the lateral movement of an attack and potentially keep it from accessing sensitive data, files, and systems.


10 Years of Kubernetes: Past, Present, and Future

There is little risk (or reason) that Wasm will in some way displace containers. WebAssembly’s virtues — fast startup time, small binary sizes, and fast execution — lend strongly toward serverless workloads where there is no long-running server process. But none of these things makes WebAssembly an obviously better technology for long-running server processes that are typically encapsulated in containers. In fact, the opposite is true: Right now, few servers can be compiled to WebAssembly without substantial changes to the code. When it comes to serverless functions, though, WebAssembly’s sub-millisecond cold start, near-native execution speed, and beefy security sandbox make it an ideal compute layer. If WebAssembly will not displace containers, then our design goal should be to complement containers. And running WebAssembly inside of Kubernetes should involve the deepest possible integration with existing Kubernetes features. That’s where SpinKube comes in. Packaging a group of open source tools created by Microsoft, Fermyon, Liquid Reply, SUSE, and others, SpinKube plumbs WebAssembly support directly into Kubernetes. A WebAssembly application can use secrets, config maps, volume mounts, services, sidecars, meshes, and so on. 


Cultivating a High Performance Environment

At the organizational level, how is a culture that supports high performers put in place, and how does it remain in place? The simple answer is that cultural leaders must set the foundation. A great example is Gary Vaynerchuk. As CEO of his organization, he embodies many high-performing qualities we’ve identified as power skills. He is the primary champion (Sponsor) for this culture, hires leaders (resources) who make up a group of champions, and these leaders hire others (teams) who expand the group of champions. Tools, tactics, and processes are put in place by champions at all levels to support, build, and maintain the culture. Those who don’t resonate with high performance are supported as well and for as long as possible. If they decide not to support the culture, they are helped to leave in a supportive manner. As organizations change and embrace true high performance (power skills), authentic high performers will proliferate. Organizations don’t really have a choice about whether to move to the new paradigm. This is the way of the present and the future. Steve Jobs said it well: “We don’t hire experts to tell them what to do. We hire experts to tell us what to do.” 


Top 10 Use Cases for Blockchain

Smart contracts on the blockchain can also automate derivative contract execution based on pre-defined rules while automating dividend payments. Perhaps most notable is its ability to tokenise traditional assets such as stocks and bonds into digital securities – paving the way for fractional ownership. ... Blockchain can also power CBDCs – a digital form of central bank money that offers unique advantages for central banks at retail and wholesale levels, from enhanced financial access for individuals to greater infrastructural efficiency for intermediate settlements. With distributed ledger technology (DLT), CBDCs can be issued, recorded and validated in a decentralised way. ... Blockchain technology is becoming vital in the cybersecurity space too. When it comes to digital identities, blockchain enables the concept of self-sovereign identity (SSI), where individuals have complete control and ownership over their digital identities and personal data. Rather than relying on centralised authorities like companies or governments to issue and manage identities, blockchain enables users to create and manage their own.


Encryption as a Cloud-to-Cloud Network Security Strategy

Like upper management, some network analysts and IT leaders resist using data encryption. First, they view encryption as overkill, in technology and in the budget. Second, they may not have much first-hand experience with data encryption. Encryption uses black-box arithmetic algorithms that few IT professionals understand or care about. Next, if you opt to use encryption, you have to make the right choice among many different types of encryption options. In some cases, an industry regulation may dictate the choice of encryption, which simplifies the decision. This can actually be a benefit on the budget side because you don't have to fight for new budget dollars when the driver is regulatory compliance. However, even if you don't have a regulatory requirement for the encryption of data in transit, the security risks of running without it are growing. Unencrypted data in transit can be intercepted by malicious actors for purposes of identity theft, intellectual property theft, data tampering, and ransomware. The more companies move into a hybrid computing environment that operates on-premises and in multiple clouds, the greater their risk, since more potentially unprotected data is moving from point to point over this extended outside network.


Automated Testing in DevOps: Integrating Testing into Continuous Delivery

Automated testing shifts ownership responsibilities to the engineering team. They can prepare test plans or assist with the procedure alongside regular roadmap feature development, then complete the execution using continuous integration tools. With the help of an efficient automation testing company, you can reduce the QA team size and let quality analysts focus more on vital and sensitive features. ... The major goal of continuous delivery is to deliver new code releases to customers as fast as possible. If there is any manual or time-consuming step within the delivery process, automating delivery to users becomes challenging, if not impossible. Continuous delivery can be an effective part of a greater deployment pipeline; it is a successor to, and relies on, continuous integration. Continuous integration is responsible for running automated tests against new code changes and verifying whether those changes break existing features or introduce new bugs. Continuous delivery takes place once the CI step passes the automated test plan.



Quote for the day:

"If you really want the key to success, start by doing the opposite of what everyone else is doing." -- Brad Szollose

Daily Tech Digest - June 11, 2024

4 reasons existing anti-bot solutions fail to protect mobile APIs

Existing anti-bot solutions attempt to bend their products to address mobile-based threats. For example, some require the implementation of an SDK into the mobile app, because that’s the only way the mobile app can respond to the main methods used by WAFs to identify bots from humans. Such solutions also typically require separate servers to be deployed behind the WAF, which are used to evaluate connection requests to discern legitimate connections from malicious ones. These “workarounds” impose single points of failure, performance bottlenecks, and latency, and often come with unacceptable capacity limitations. On top of that, WAF mobile SDKs also have limitations in terms of the dev framework support and can require developers to rewrite the network stack to achieve compatibility with the WAF. Such workarounds create more work and more costs. To make matters worse, because most anti-bot solutions on the market are not sufficiently hardened to protect against clones, spoofing, malware, or tampering, hackers can easily compromise, bypass, or disable the anti-bot solution if it’s implemented inside a mobile app that is not sufficiently protected against reverse engineering and other attacks.


Advancing interoperability in Africa: Overcoming challenges for digital integration

From a legal perspective, Mihret Woodmatas, senior ICT expert, department of infrastructure and energy, African Union Commission (AUC), points out that differing levels of development across countries pose a challenge. A significant issue is the lack of robust legal frameworks for data protection and privacy. ... Hopkins underscores the importance of sharing data to benefit those it is collected for, particularly refugees. While sharing data comes with risks, particularly concerning security and privacy, these can be managed with proper risk treatments. The goal is to avoid siloed data systems and instead foster coordination and cooperation among different entities. Hopkins discussed the digital transformation across states and international agencies, emphasizing the need for effective data sharing. Good data sharing practices enable various entities to provide coordinated services, significantly benefiting refugees by facilitating their access to education, healthcare, and employment. Interoperability also supports local communities economically and ensures a unique and continuous identity for refugees, even if they remain displaced for years or decades. 


Cloud migration expands the CISO role yet again

CISOs must now ensure they can report to the SEC within four business days of determining an incident’s materiality, describing its nature, scope, and potential impact. They must also communicate risk management strategies and incident response plans to ensure the board is well-informed about the organization’s cybersecurity posture. These changes require a more structured and proactive approach because CISOs must now be aware of compliance status in near real-time, not only to provide all cybersecurity incident data and context to the board, compliance teams, and finance teams, but to ensure they can determine quickly whether an incident has a material impact and therefore must be reported to the SEC. CISOs who miss making a timely disclosure or have the wrong security and compliance strategy in place can expect to be fined, even if the incident doesn’t turn into a catastrophic cybersecurity event. Boards must be able to trust that CISOs can answer any question related to compliance and security quickly and accurately, and the board themselves must be familiar with cybersecurity concepts, able to understand the risks and ask the right questions.
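A "four business days" clock is easy to get wrong by hand. Here is a minimal sketch of the deadline arithmetic; note it skips weekends only and, for simplicity, ignores federal holidays, which a real compliance calendar would have to account for:

```python
from datetime import date, timedelta

def disclosure_deadline(materiality_date, business_days=4):
    """Date N business days after the materiality determination.

    Skips Saturdays and Sundays; holidays are ignored in this sketch.
    """
    d = materiality_date
    remaining = business_days
    while remaining:
        d += timedelta(days=1)
        if d.weekday() < 5:  # Monday (0) through Friday (4)
            remaining -= 1
    return d
```

For example, a materiality determination on a Thursday pushes the deadline across the weekend to the following Wednesday.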


Generative AI Is Not Going To Build Your Engineering Team For You

People act like writing code is the hard part of software. It is not. It never has been, it never will be. Writing code is the easiest part of software engineering, and it’s getting easier by the day. The hard parts are what you do with that code—operating it, understanding it, extending it, and governing it over its entire lifecycle. A junior engineer begins by learning how to write and debug lines, functions, and snippets of code. As you practice and progress towards being a senior engineer, you learn to compose systems out of software, and guide systems through waves of change and transformation. Sociotechnical systems consist of software, tools, and people; understanding them requires familiarity with the interplay between software, users, production, infrastructure, and continuous changes over time. These systems are fantastically complex and subject to chaos, nondeterminism and emergent behaviors. If anyone claims to understand the system they are developing and operating, the system is either exceptionally small or (more likely) they don’t know enough to know what they don’t know. Code is easy, in other words, but systems are hard.


Is Oracle Finally Killing MySQL?

Things have changed, though, in recent years with the introduction of “MySQL Heatwave”—Oracle’s MySQL Cloud Database. Heatwave includes a number of features that are not available in MySQL Community or MySQL Enterprise, such as acceleration of analytical queries or ML functionality. When it comes to “analytical queries,” it is particularly problematic as MySQL does not even have parallel query execution. At a time when CPUs with hundreds of cores are coming to market, those cores are not getting significantly faster, which is increasingly limiting performance. This does not just apply to queries coming from analytical applications but also simple “group by” queries common in operational applications. Note: MySQL 8 does have some parallelization support for DDLs but not for queries. Could this have something to do with giving people more reason to embrace MySQL Heatwave? Or, rather move to PostgreSQL or adopt Clickhouse? Vector Search is another area where open source MySQL lacks. While every other major open source database has added support for Vector Search functionality, and MariaDB is working on it, having it as a cloud-only MySQL Heatwave Feature in the MySQL ecosystem is unfortunate, to say the least.


Giant legacies

Thought leadership in general demands we stand on the shoulders of innovators who have gone before. Thinking in HR is no exception. The essence of this debt was captured in the Hippocratic Oath this column had proposed for HR professionals: "I shall not forget the debt and respect I owe to those who have taught me and freely pass on the best of my learnings to those who work with me as well as through professional bodies, educational institutes or other means of dissemination. ... Thinking up brilliant new concepts, or applying those that have taken root in one field to another, is necessary but not sufficient for creating a LOG. There are two other tests. If the concept, strategy or process proves its worth, it should be lasting. It need not become an unchangeable sacrament, but further developments should emanate from it rather than demand a reversal of the flow. While we can sympathize with radical ideas (or greedy cats) that are brought to a dead end by 'malignant fate', we cannot honour them as LOGs. Apart from durability over time, we have transmission across organisational boundaries, which establishes the generalizability of the innovation. 


Solving the data quality problem in generative AI

One of the biggest misconceptions surrounding synthetic data is model collapse. However, model collapse stems from research that isn’t really about synthetic data at all. It is about feedback loops in AI and machine learning systems, and the need for better data governance. For instance, the main issue raised in the paper The Curse of Recursion: Training on Generated Data Makes Models Forget is that future generations of large language models may be defective due to training data that contains data created by older generations of LLMs. The most important takeaway from this research is that to remain performant and sustainable, models need a steady flow of high-quality, task-specific training data. For most high-value AI applications, this means fresh, real-time data that is grounded in the reality these models must operate in. Because this often includes sensitive data, it also requires infrastructure to anonymize, generate, and evaluate vast amounts of data—with humans involved in the feedback loop. Without the ability to leverage sensitive data in a secure, timely, and ongoing manner, AI developers will continue to struggle with model hallucinations and model collapse.


DevSecOps Made Simple: 6 Strategies

Collective Responsibility describes the common practices shared by organizations that have taken a program-level approach to security culture development. Broken into three key areas: 1) executive support and engagement, 2) program design and implementation, 3) program sustainment and measurement, the paper suggests how to best garner (and keep) executive support and engagement while building an inclusive cultural program based on cumulative experience. ... Collaboration and Integration addresses the importance of integrating DevSecOps into organizational processes and stresses the key role that fostering a sense of collaboration plays in its successful implementation. ... Pragmatic Implementation outlines the practices, processes, and technologies that organizations should consider when building out any DevSecOps program and how to implement DevSecOps pragmatically. ... Bridging Compliance and Development is broken into three parts offering 1) an approach to compartmentalization and assessment with an eye to minimizing operating impact, 2) best practices on how compliance can be designed and implemented into applications, and 3) a look at the different security tooling practices that can provide assurance to compliance requirements.


Change Management Skills for Data Leaders

Strategic planning and decision-making are pivotal aspects of successful organizational transformation, requiring nuanced change management skills. Developing a strategy for organizational change in Data Management is a critical task that requires an understanding of both the current state of affairs and the desired future state. For data leaders, this involves conducting a thorough assessment to identify gaps between these two states. ... Developing effective communication and collaboration strategies is paramount in navigating the complexities of change management. A key component of this process involves crafting clear, concise, and transparent messaging that resonates with all stakeholders involved. This ensures that everyone, from team members to top-level management, understands not only the nature of the change but also its purpose and the benefits it promises to bring. ... Resilience is not just about enduring change but also about emerging stronger from it. Data leaders are often at the forefront of navigating through uncharted territories, be it technological advancements or market shifts, which requires an inherent ability to withstand pressure and bounce back from setbacks. 


Sanity Testing vs. Regression Testing: Key Differences

Sanity testing is the process that evaluates the specific software application functionality after its deployment with added new features or modifications and bug fixes. In simple terms, it is the quick testing to check whether the changes made are as per the Software Requirement Specifications (SRS). It is generally performed after the minor code adjustment to ensure seamless integration with existing functionalities. If the sanity test fails, it's a red flag that something's wrong, and the software might not be ready for further testing. This helps catch problems early on, saving time and effort down the road. ... Regression testing is the process of re-running tests on existing software applications to verify that new changes or additions haven't broken anything. It's a crucial step performed after every code alteration, big or small, to catch regressions – the re-emergence of old bugs due to new changes. By re-executing testing scenarios that were originally scripted when known issues were initially resolved, you can ensure that any recent alterations to an application haven't resulted in regression or compromised previously functioning components.
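A toy pytest-style suite makes the distinction concrete. The `apply_discount` function and the test names here are hypothetical, purely for illustration:

```python
def apply_discount(price, pct):
    """Toy function under test: price after a percentage discount."""
    return round(price * (1 - pct / 100), 2)

# Sanity check: one quick assertion on the code path that was just changed,
# run right after deployment to confirm the change matches the SRS.
def test_sanity_new_discount():
    assert apply_discount(100.0, 10) == 90.0

# Regression suite: re-runs scenarios scripted when earlier bugs were fixed,
# to confirm the new change has not resurrected them.
def test_regression_zero_discount():
    assert apply_discount(50.0, 0) == 50.0

def test_regression_quarter_off():
    assert apply_discount(80.0, 25) == 60.0
```

In practice the sanity check gates whether deeper testing is worth running at all, while the regression suite is re-executed after every code alteration, big or small.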



Quote for the day:

"The two most important days in your life are the day you are born and the day you find out why." --Mark Twain