Daily Tech Digest - August 21, 2024

Use the AI S-curve to drive meaningful technological change

The S-curve is a graphical representation of how technology matures over time. It starts slowly, with early adopters, specialized use cases, and technocrats. As the technology proves its value, it enters a phase of rapid growth where adoption accelerates and becomes more widely integrated into various industries and applications. However, as technology advances, becoming cheaper, faster, and more efficient, it inevitably reaches some logical limit and settles into a natural “top” of the S-curve. When a technology reaches its limit, progress is relatively slow, typically requiring significant increases in complexity. ... As new technologies like AI emerge and mature, organizations must balance the need to stay competitive with the potential risks and uncertainties associated with early adoption. This challenge is not new. In his book, The Innovator’s Dilemma, Clayton Christensen describes the difficult choice companies face between maintaining their existing, profitable business models and investing in new, potentially disruptive technologies. So, how can organizations navigate this decision? One approach is to ensure that there is a dedicated unit that operates on a long takt time, outside the quarterly or annual reporting pressure. 
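The S-curve the article describes is commonly modeled with a logistic function. A minimal sketch (the function name and parameters below are illustrative, not from the article):

```python
import math

def s_curve(t, ceiling=1.0, midpoint=5.0, rate=1.0):
    """Logistic model of technology maturity over time t:
    slow start, rapid middle growth, plateau near the ceiling."""
    return ceiling / (1.0 + math.exp(-rate * (t - midpoint)))

# Adoption is low early, about half the ceiling at the midpoint,
# and flattens as the technology approaches its natural limit.
early, mid, late = s_curve(0), s_curve(5), s_curve(10)
```

The plateau is the point at which, as the article notes, further progress typically requires significant increases in complexity.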


How to Present the Case for a Larger IT Budget

Outright rejection of a budget expansion request is unlikely, but not impossible. "The important thing is not to take rejection personally -- it's the case that's rejected, not the person who presented it," Biswas says. It's also important to understand that the rejection is not necessarily wrong. "Sometimes, we get too close to our ideas to evaluate them impartially," he explains. Understand, too, that a rejection isn't necessarily forever, since issues that prevented approval can be addressed and resolved to present a more convincing case. It's important to fully understand stakeholders' individual interests as well as their tactical and strategic goals, Hachmann advises. "This approach requires a strong understanding of the respective priorities of each person involved in the budget approval processes," he says. "With this [tactic], you'll be better equipped to align IT initiatives and their costs with the stakeholders' business strategy." IT leaders often make the mistake of generalizing about how a bigger budget will improve IT instead of communicating the ways it will help the business. "IT leaders should be careful they're not perceived as 'empire builders' instead of business leaders who want what's best for the larger organization," Biswas says. 


Beyond Orchestration: A Comprehensive Approach to IaC Strategy

In large organizations, enforcing a single IaC tool across all departments is often impractical. Today, there is a diversity of tools that cater to different stacks, strengths and collaboration with developers — from those that are native to a specific platform (CloudFormation for AWS or ARM for Azure), to those for multicloud or cloud native, from Terraform and OpenTofu, to Helm and Crossplane, and those that cater to developers like Pulumi or AWS Cloud Development Kit (CDK). Different teams may prefer different tools based on their expertise, use cases or specific project requirements. A robust IaC strategy must account for: the coexistence of multiple IaC tools within the organization; visibility across various IaC implementations; and governance and compliance across diverse IaC ecosystems. Ignoring this multi-IaC reality can lead to silos, reduced visibility and governance challenges. ... As DevOps and platform engineers, we’ve developed a platform that we ourselves have needed over many years of managing cloud fleets at scale. A platform that addresses not just tooling and orchestration, but all aspects of a comprehensive IaC strategy can be the difference between 2 a.m. downtime and a good night’s sleep.
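A first step toward the visibility the article calls for is simply inventorying which IaC tools each repository uses. A minimal sketch, assuming detection by characteristic file names (the marker mapping below is illustrative and far from complete):

```python
import os

# Characteristic files/extensions for common IaC tools (illustrative mapping).
IAC_MARKERS = {
    ".tf": "Terraform/OpenTofu",
    ".tfstate": "Terraform/OpenTofu",
    "Chart.yaml": "Helm",
    "Pulumi.yaml": "Pulumi",
    "cdk.json": "AWS CDK",
}

def detect_iac_tools(repo_path):
    """Walk a repository tree and report which IaC tools appear in it."""
    found = set()
    for _root, _dirs, files in os.walk(repo_path):
        for name in files:
            _, ext = os.path.splitext(name)
            if name in IAC_MARKERS:
                found.add(IAC_MARKERS[name])
            elif ext in IAC_MARKERS:
                found.add(IAC_MARKERS[ext])
    return sorted(found)
```

Running this across an organization's repositories yields the multi-IaC inventory that governance and compliance checks can then be layered on.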


DevSecOps Needs Are Shifting in the Cloud-Native Era

A cornerstone activity for any DevSecOps team is to secure secrets — that is, the passwords and access credentials that allow access to services and applications. Marks noted that despite many respondents having tools in place for secret scanning or detection, the highest number of incidents (32%) were from secrets stolen from a repository. The study also included data on frequency of usage of tools. She said that it showed that scanning takes place periodically, including daily, multiple times per week, or weekly, but this was not aligned with code pushes or development processes. "So, this is an area for much improvement as scanning takes resources and time and should align better with developer workflows," Marks said. ...  Having so many tools can introduce such challenges as gaining consistency across development teams, dealing with alert fatigue, or determining which remediations are needed and/or how remediation can mitigate risk. ... "Instead, a third-party platform that can support multiple tools can serve as a governance layer to help orchestrate the usage of needed tools, collect data, and help security teams more efficiently gain the visibility they need, apply the right controls and processes, and determine needed actions."
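Aligning scanning with developer workflows, as Marks suggests, can start with a check that runs on every commit or push rather than on a calendar. A minimal sketch of a regex-based secret scan (the patterns are illustrative; dedicated scanners ship hundreds of rules):

```python
import re

# Illustrative patterns only; production scanners cover far more secret types.
SECRET_PATTERNS = [
    re.compile(r"AKIA[0-9A-Z]{16}"),                     # AWS access key ID shape
    re.compile(r"-----BEGIN (?:RSA )?PRIVATE KEY-----"),  # PEM private key header
    re.compile(r"(?i)(?:api[_-]?key|token)\s*=\s*['\"][A-Za-z0-9_\-]{16,}['\"]"),
]

def scan_for_secrets(text):
    """Return a list of (rule index, matched string) hits found in the text."""
    hits = []
    for i, pat in enumerate(SECRET_PATTERNS):
        for m in pat.finditer(text):
            hits.append((i, m.group(0)))
    return hits

# In a pre-commit or pre-push hook: scan staged diffs and fail on any hit,
# so detection happens at code-push time instead of on a weekly schedule.
```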


Exclusive: How Piramidal is using AI to decode the human brain

The company is first fine-tuning its model for the neuro ICU; that product will be able to ingest EEG data and interpret it in near-real time, providing outputs to medical staff on occurrence and diagnosis of disorders such as seizures, traumatic brain bleeding, inflammations and other brain dysfunctions. “It is truly an assistant to the doctor,” said Pahuja, noting that the model can ideally help provide quicker and more accurate diagnoses that can save doctors’ time and get patients the care they need much more quickly. “Brainwaves are central to neurology diagnosis,” Piramidal co-founder and CEO Dimitris Sakellariou, who holds a PhD in neuroscience, told VentureBeat. By automating analysis and enhancing understanding through large models, personalized treatment can be revolutionized and diseases can be predicted earlier in their progression, he noted. And, as wireless EEG sensors become more mainstream, models like Piramidal’s can enable the creation of personalized agents that “continuously measure and monitor brain health.” “These agents will offer real-time insights into how patients respond to new treatments and how their conditions may evolve,” said Sakellariou.


What is ‘model collapse’? An expert explains the rumours about an impending AI doom

There are hints developers are already having to work harder to source high-quality data. For instance, the documentation accompanying the GPT-4 release credited an unprecedented number of staff involved in the data-related parts of the project. We may also be running out of new human data. Some estimates say the pool of human-generated text data might be tapped out as soon as 2026. It’s likely why OpenAI and others are racing to shore up exclusive partnerships with industry behemoths such as Shutterstock, Associated Press and NewsCorp. They own large proprietary collections of human data that aren’t readily available on the public internet. However, the prospects of catastrophic model collapse might be overstated. Most research so far looks at cases where synthetic data replaces human data. In practice, human and AI data are likely to accumulate in parallel, which reduces the likelihood of collapse. The most likely future scenario will also see an ecosystem of somewhat diverse generative AI platforms being used to create and publish content, rather than one monolithic model. This also increases robustness against collapse.
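The accumulate-versus-replace distinction can be made concrete with a toy experiment (entirely illustrative, not from the article): repeatedly fit a one-dimensional Gaussian to data, then sample "synthetic" data from the fit. Replacing the data each generation lets the fitted distribution drift away from the original, while accumulating anchors it near the true parameters:

```python
import random
import statistics

def generations(n_gen=20, n_samples=2000, accumulate=False, seed=0):
    """Toy model-collapse experiment: each generation fits mean/stdev
    to its data, then samples fresh 'synthetic' data from that fit."""
    rng = random.Random(seed)
    data = [rng.gauss(0.0, 1.0) for _ in range(n_samples)]  # 'human' data
    for _ in range(n_gen):
        mu = statistics.fmean(data)
        sigma = statistics.stdev(data)
        synthetic = [rng.gauss(mu, sigma) for _ in range(n_samples)]
        data = data + synthetic if accumulate else synthetic
    return statistics.stdev(data)

# With accumulate=True the final stdev stays close to the original 1.0;
# with accumulate=False the estimate performs a random walk and, on
# average, degrades generation by generation.
```

A single Gaussian is a crude stand-in for a generative model, but it captures why parallel accumulation of human and AI data reduces the likelihood of collapse.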


Custodians looking to beat offenders in the GenAI cybersecurity battle

While the security community, as well as the global technology community, seems united on somehow regulating this new-age technology, there is a limited number of things that can actually be done. “There are two ways to combat attacks enabled by the widespread use of GenAI,” Kashifuddin said. “For internal threats, it comes down to deploying ‘cyber for GenAI’. For external threats, the use of ‘GenAI for cyber’ defense is becoming more of a reality and evolving quickly.” The use of cyber for GenAI threats simply means applying fundamental controls to protect company resources from a GenAI-based attack, he explained. “Traditional data protection tools like Data Loss Prevention (DLP), Cloud Access Security Broker (CASB) when used in conjunction with web proxies amplify a company’s ability to detect and restrict exfiltration of sensitive data to external GenAI services.” “GenAI for cyber” refers to a growing class of techniques using GenAI to combat GenAI-induced attacks. Apart from advanced phishing detection and automated incident response, this includes a number of new techniques for hardening models to neutralize adversarial activities. “The discipline of protecting AI systems is just beginning to evolve, but there are some interesting techniques for that already,” Barros said. 


New phishing method targets Android and iPhone users

ESET analysts discovered a series of phishing campaigns targeting mobile users that used three different URL delivery mechanisms. These mechanisms include automated voice calls, SMS messages, and social media malvertising. The voice call delivery is done via an automated call that warns the user about an out-of-date banking app and asks the user to select an option on the numerical keyboard. After the correct button is pressed, a phishing URL is sent via SMS, as was reported in a tweet. Initial delivery by SMS was performed by sending messages indiscriminately to Czech phone numbers. The message sent included a phishing link and text to socially engineer victims into visiting the link. The malicious campaign was spread via registered advertisements on Meta platforms like Instagram and Facebook. These ads included a call to action, like a limited offer for users who “download an update below.” After opening the URL delivered in the first stage, Android victims are presented with two distinct campaigns, either a high-quality phishing page imitating the official Google Play store page for the targeted banking application, or a copycat website for that application. From here, victims are asked to install a “new version” of the banking app.


The Cloud Talent Crisis: Skills Shortage Drives Up Costs, Risks

"Cloud complexity is growing by the day, and with it, the challenge of responding to security threats," he said. "Organizations need more skilled engineers to deal with attacks — or even notice them." He noted that phishing attacks, password leakage, and third-party attacks — the three biggest threats reported in this year's survey — are even more dangerous without skilled, well-resourced personnel. ... "For me, cloud waste is the biggest concern," he said. "It means more money goes where it shouldn't, less money is available to hire talented staff, and fewer resources are available to that staff." ... "This approach can result in a faster pace of innovation, better mapping of features with customer requirements, and additional cost savings opportunities," he explained. Some components of this strategy include working backwards from the customer, organizing teams around products, keeping development teams small, and reducing risk through iteration. "There is a clear correlation between a lack of skilled talent and a lack of cloud maturity," O'Neill said. "High-maturity organizations tend to establish cloud principles and then strictly adhere to them."


The critical imperative of data center physical security: Navigating compliance regulations

In an increasingly digital world where data is often considered the new currency, data centers serve as the fortresses that safeguard the invaluable assets of organizations. While we often associate data security with firewalls, encryption, and cyber threats, it's imperative not to overlook the significance of physical security within these data fortresses. By assessing risks associated with physical security, environmental factors, and access controls, data center operators can take proactive measures to mitigate said risks. These measures greatly aid data centers in preventing unauthorized access, which can lead to data theft, service disruptions, and financial losses. Additionally, failing to meet compliance regulations can result in severe legal consequences and damage to an organization's reputation. In a perfect world, simply implementing iron-clad physical barriers and adhering to compliance regulations would completely eliminate the risk of data breaches. Unfortunately, that’s simply not the case. Both data center security and compliance encompass not only cybersecurity and physical security, but secure data sanitization and destruction as well. 



Quote for the day:

"Personal leadership is the process of keeping your vision and values before you and aligning your life to be congruent with them." -- Stephen R. Covey

Daily Tech Digest - August 20, 2024

Humanoid robots are a bad idea

Humanoid robots that talk, perceive social and emotional cues, elicit empathy and trust, trigger psychological responses through eye contact and trick us into the false belief that they have inner thoughts, intentions and even emotions create for humanity what I consider a real problem. Our response to humanoid robots is based on delusion. Machines — tools, really — are being deliberately designed to hack our human hardwiring and deceive us into treating them as something they’re not: people. In other words, the whole point of humanoid robots is to dupe the human mind, to mislead us into having the kind of connection with these machines formerly reserved exclusively for other human beings. Why are some robot makers so fixated on this outcome? Why isn’t the goal instead to create robots that are perfectly designed for their function, rather than perfectly designed to trick the human mind? Why isn’t there a movement to make sure robots do not elicit false emotions and beliefs? What’s the harm in preserving our intuition that a robot is just a machine, just a tool? Why try to route around that intuition with machines that trick our minds, coopting or hijacking our human empathy?


11 Irritating Data Quality Issues

Organizations need to put data quality first and AI second. Without dignifying this sequence, leaders fall into fear of missing out (FOMO) in attempts to grasp AI-driven cures to either competitive or budget pressures, and they jump straight into AI adoption before conducting any sort of honest self-assessment as to the health and readiness of their data estate, according to Ricardo Madan, senior vice president at global technology, business and talent solutions provider TEKsystems. “This phenomenon is not unlike the cloud migration craze of about seven years ago, when we saw many organizations jumping straight to cloud-native services, after hasty lifts-and-shifts, all prior to assessing or refactoring any of the target workloads. This sequential dysfunction results in poor downstream app performance since architectural flaws in the legacy on-prem state are repeated in the cloud,” says Madan in an email interview. “Fast forward to today, AI is a great ‘truth serum’ informing us of the quality, maturity, and stability of a given organization’s existing data estate -- but instead of facing unflattering truths, invest in holistic AI data readiness first, before AI tools.”


CISOs urged to prepare now for post-quantum cryptography

Post-quantum algorithms often require larger key sizes and more computational resources compared to classical cryptographic algorithms, a challenge for embedded systems, in particular. During the transition period, systems will need to support both classical and post-quantum algorithms to support interoperability with legacy systems. Deirdre Connolly, cryptography standardization research engineer at SandboxAQ, explained: “New cryptography generally takes time to deploy and get right, so we want to have enough lead time before quantum threats are here to have protection in place.” Connolly added: “Particularly for encrypted communications and storage, that material can be collected now and stored for a future date when a sufficient quantum attack is feasible, known as a ‘Store Now, Decrypt Later’ attack: upgrading our systems with quantum-resistant key establishment protects our present-day data against upcoming quantum attackers.” Standards bodies, hardware and software manufacturers, and ultimately businesses across the globe will have to implement new cryptography across all aspects of their computing systems. Work is already under way, with vendors such as BT, Google, and Cloudflare among the early adopters.
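The dual-support transition described above is commonly handled with "hybrid" key establishment: the session key is derived from both a classical and a post-quantum shared secret, so it stays safe if either exchange holds. A minimal sketch of the combining step only, with placeholder byte strings standing in for real shared secrets (actual deployments pair, for example, an elliptic-curve exchange with a post-quantum KEM):

```python
import hashlib
import hmac

def hybrid_key(classical_secret: bytes, pq_secret: bytes,
               context: bytes = b"hybrid-kem-demo") -> bytes:
    """Combine two shared secrets with an HKDF-style extract-and-expand.
    An attacker must break BOTH exchanges to recover the session key."""
    # Extract: concatenate the secrets and mix them under a context salt.
    prk = hmac.new(context, classical_secret + pq_secret, hashlib.sha256).digest()
    # Expand: one HMAC block yields a 256-bit session key.
    return hmac.new(prk, b"\x01", hashlib.sha256).digest()

session_key = hybrid_key(b"classical-ecdh-secret", b"pq-kem-secret")
```

Because the derivation mixes both inputs, a future quantum break of the classical exchange alone does not expose traffic protected this way, which is the defense against "Store Now, Decrypt Later" collection.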


AI for application security: Balancing automation with human oversight

Security testing should be integrated throughout Application Delivery Pipelines, from design to deployment. Techniques such as automated vulnerability scanning, penetration testing, continuous monitoring, and many others are essential. By embedding compliance and risk assessment tasks into underlying change management processes, IT professionals can ensure that security testing is at the core of everything they do. Incorporating these strategies at the application component level ensures alignment with business needs to effectively prioritize results, identify attacks, and mitigate risks before they impact the network and infrastructure. ... To build a security-first mindset, organizations must embed security best practices into their culture and workflows. If new IT professionals coming into an organization are taught that security-first isn’t a buzzword, but instead the way the organization operates, it becomes company culture. Making security an integral part of the application delivery pipelines ensures that security policies and processes align with business goals. Education and communication are key—security teams must work closely with developers to ensure that security requirements are understood and valued. 


TSA biometrics program is evolving faster than critics’ perceptions

Privacy impact assessments (PIAs) are not only carried out for each new or changed process, but also published and enforced. The images of U.S. citizens captured by the TSA may be evaluated and used for testing, but they are deleted within 12 hours. Travelers have the choice of opting out of biometric identity verification, in which case they go through a manual ID check, just like decades ago. As happened previously with body scanners, TSA has adapted the signage it uses to notify the public about its use of biometrics. Airports where TSA uses biometrics now have signs that state in bold letters that participation is optional, explain how it works and include QR codes for additional information. The technology is also highly accurate, with tests showing 99.97% accurate verifications. In cases where the match fails, the traveler must go through the same manual procedure used previously, which is also applied when people opt out. TSA does not use biometrics to match people against mugshots from local police departments, for deportations or surveillance. In contrast, the proliferation of CCTV cameras observing people on their way to the airport and back home is not mentioned by Senator Merkley.


Blockchain: Redefining property transactions and ownership

Blockchain’s core strength lies in its ability to create a secure, immutable ledger of transactions. In the real estate context, this means that all details related to a property transaction—from the initial agreement to the final transfer of ownership—are recorded in a way that cannot be altered or tampered with. Blockchain technology empowers brokers to streamline transactions and enhance transparency, allowing them to focus on offering personalised insights and strategic advice. This shift enables brokers to provide a more efficient and cost-effective service while maintaining their advisory role in the real estate process. Another innovative application of blockchain in real estate is through smart contracts. These are digital contracts that automatically execute when certain conditions are met, ensuring that the terms of an agreement are fulfilled without the need for manual oversight. In real estate, smart contracts can be used to automate everything from title transfers to escrow arrangements. This automation not only speeds up the process but also reduces the chances of disputes, as all terms are clearly defined and executed by the technology itself. Beyond improving the efficiency of transactions, blockchain also has the potential to change how we think about property ownership. 
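The escrow logic a smart contract automates can be pictured as a small state machine: funds release only once every agreed condition holds, with no manual oversight of the settlement step. A toy sketch (class and method names are illustrative; real contracts run on-chain in a language like Solidity):

```python
class EscrowContract:
    """Toy escrow state machine mirroring what a smart contract automates:
    settlement executes automatically once all conditions are met."""

    def __init__(self, price, buyer, seller):
        self.price, self.buyer, self.seller = price, buyer, seller
        self.deposited = 0
        self.title_transferred = False
        self.state = "open"

    def deposit(self, who, amount):
        # Only the buyer can fund the escrow while it is open.
        if self.state == "open" and who == self.buyer:
            self.deposited += amount
        self._maybe_settle()

    def record_title_transfer(self):
        self.title_transferred = True
        self._maybe_settle()

    def _maybe_settle(self):
        # The 'automatic execution': no party triggers settlement manually.
        if self.deposited >= self.price and self.title_transferred:
            self.state = "settled"
```

Neither party can settle early or block settlement once both conditions are recorded, which is why clearly defined terms reduce the chance of disputes.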


Agile Reinvented: A Look Into the Future

There’s no denying that agile is poised at a pivotal juncture, especially given the advent of AI. While no one knows how AI will influence agile in the long term, it is already shaping how agile teams are structured and how its members approach their work, including using AI tools to code or write user stories and jobs to be done. To remain relevant and impactful, agile must be responsive to the evolving needs of the workforce. Younger developers, in particular, seek more room for creativity. New approaches to agile team formation—including Team and Org Topologies or FaST, which relies on elements of dynamic reteaming instead of fixed team structures to tackle complex work—are emerging to create space for innovation. Since agile was built upon the values of putting people first and adapting to change, it can, and should, continue to empower teams to drive innovation within their organizations. This is the heart of modern agile: not blindly adhering to a set of rules but embracing and adapting its principles to your team’s unique circumstances. As agile continues to evolve, we can expect to see it applied in even more varied and innovative ways. For example, it already intersects with other methodologies like DevSecOps and Lean to form more comprehensive frameworks. 


Breaking Free from Ransomware: Securing Your CI/CD Against RaaS

By embracing a proactive DevSecOps mindset, we can repel RaaS attacks and safeguard our code. Here’s your toolkit: ... Don’t wait until deployment to tighten the screws. Integrate security throughout the software development life cycle (SDLC). Leverage software composition analysis (SCA) and software bill of materials (SBOM) creation, helping you scrutinize dependencies for vulnerabilities and maintain a transparent record of every software component in your pipeline. ... Your pipelines aren’t static entities; they are living ecosystems demanding constant vigilance. Leverage tools to implement continuous monitoring and logging of pipeline activity. Look for anomalies, suspicious behaviors and unauthorized access attempts. Think of it as having a cybersecurity hawk perpetually circling your pipelines, detecting threats before they take root. ... Minimize unnecessary access to your CI/CD environment. Enforce strict role-based access controls and least privilege. Utilize access control tools to manage user roles and permissions tightly, ensuring only authorized users can interact with sensitive resources. Remember, the 2022 GitHub vulnerability exposed the dangers of lax access control in CI/CD environments.
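The SCA/SBOM step above reduces to cross-referencing a component inventory against known-vulnerable versions and failing the pipeline on any hit. A minimal sketch (the SBOM shape and advisory table are illustrative; real pipelines pull from feeds such as OSV or the NVD):

```python
# Illustrative advisory data; production tooling consumes live vulnerability feeds.
KNOWN_VULNERABLE = {
    ("log4j-core", "2.14.1"): "CVE-2021-44228",
    ("openssl", "3.0.1"): "CVE-2022-0778",
}

def audit_sbom(sbom_components):
    """Return advisories matching (name, version) pairs found in the SBOM."""
    findings = []
    for comp in sbom_components:
        key = (comp["name"], comp["version"])
        if key in KNOWN_VULNERABLE:
            findings.append({"component": comp["name"],
                             "version": comp["version"],
                             "advisory": KNOWN_VULNERABLE[key]})
    return findings

sbom = [{"name": "log4j-core", "version": "2.14.1"},
        {"name": "requests", "version": "2.31.0"}]
# A CI gate would fail the build whenever audit_sbom(sbom) is non-empty.
```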


Achieving cloudops excellence

Although there are no hard-and-fast rules regarding how much to spend on cloudops as a proportion of the cost of building or migrating applications, I have a few rules of thumb. Typically, enterprises should spend 30% to 40% of their total cloud computing budget on cloud operations and management. This covers monitoring, security, optimization, and ongoing management of cloud resources. ... Cloudops requires a new skill set. Continuous training and development programs that focus on operational best practices are vital. This transforms the IT workforce from traditional system administrators to cloud operations specialists who are adept at leveraging cloud environments’ nuances for efficiency. Beyond technical implementations, enterprise leaders must cultivate a culture that prioritizes operational readiness as much as innovation. The essential components are clear communication channels, cross-departmental collaboration, and well-defined roles. Organizational coherence enables firms to pivot and adapt swiftly to the changing tides of technology and market demands. It’s also crucial to measure success by deployment achievements and ongoing performance metrics. By setting clear operational KPIs from the outset, companies ensure that cloud environments are continuously aligned with business objectives. 


What high-performance IT teams look like today — and how to build one

“Today’s high-performing teams are hybrid, dynamic, and autonomous,” says Ross Meyercord, CEO of Propel Software. “CIOs need to create a clear vision and articulate and model the organization’s values to drive alignment and culture.” High-performance teams are self-organizing and want significant autonomy in prioritizing work, solving problems, and leveraging technology platforms. But most enterprises can’t operate like young startups with complete autonomy handed over to devops and data science teams. CIOs should articulate a technology vision that includes agile principles around self-organization and other non-negotiables around security, data governance, reporting, deployment readiness, and other compliance areas. ... High-performance teams are often involved in leading digital transformation initiatives where conflicts around priorities and solutions among team members and stakeholders can arise. These conflicts can turn into heated debates, and CIOs sometimes have to step in to help manage challenging people issues. “When a CIO observes misaligned goals or intra-IT conflict, they need to step in immediately to prevent organizational scar tissue from forming,” says Meyercord of Propel Software. 



Quote for the day:

"Don't necessarily avoid sharpness and sharp edges. Occasionally they are necessary to leadership." -- Donald Rumsfeld

Daily Tech Digest - August 17, 2024

The importance of connectivity in IoT

There is no point in having IoT if the connectivity is weak. Without reliable connectivity, data from sensors and devices that is intended to be collected and analysed in real time may arrive too late to be useful. In healthcare, connected devices monitor the vital signs of a patient in an intensive-care ward in real time and alert the physician to any observations that are outside the specified limits. ...  The future evolution of connectivity technologies will combine with IoT to significantly expand its capabilities. The arrival of 5G will enable high-speed, low-latency connections. This transition will usher in IoT systems that were previously impossible, such as self-driving vehicles that instantaneously analyse vehicle states and provide real-time collision avoidance. The evolution of edge computing will bring data processing closer to the edge (the IoT devices), thereby significantly reducing latency and bandwidth costs. Connectivity underpins almost everything we see as important with IoT – the data exchange, real-time usage, scale and interoperability we access in our systems.


Aren’t We Transformed Yet? Why Digital Transformation Needs More Work

When it comes to enterprise development, platforms alone can’t address the critical challenge of maintaining consistency between development, test, staging, and production environments. What teams really need to strive for is seamless propagation of changes between environments that are made production-like through synchronization, with full control over the process. This control enables the integration of crucial safety steps such as approvals, scans, and automated testing, ensuring that issues are caught and addressed early in the development cycle. Many enterprises are implementing real-time visualization capabilities to provide administrators and developers with immediate insight into differences between instances, including scoped apps, store apps, plugins, update sets, and even versions across the entire landscape. This extended visibility is invaluable for quickly identifying and resolving discrepancies before they can cause problems in production environments. A lack of focus on achieving real-time multi-environment visibility is akin to performing a medical procedure without an X-ray, CT, or MRI of the patient. 


Why Staging Doesn’t Scale for Microservice Testing

So are we doomed to live in a world where staging is eternally broken? As we’ve seen, traditional approaches to staging environments are fraught with challenges. To overcome these, we need to think differently. This brings us to a promising new approach: canary-style testing in shared environments. This method allows developers to test their changes in isolation within a shared staging environment. It works by creating a “shadow” deployment of the services affected by a developer’s changes while leaving the rest of the environment untouched. This approach is similar to canary deployments in production but applied to the staging environment. The key benefit is that developers can share an environment without affecting each other’s work. When a developer wants to test a change, the system creates a unique path through the environment that includes their modified services, while using the existing versions of all other services. Moreover, this approach enables testing at the granularity of every code change or pull request. This means developers can catch issues very early in the development process, often before the code is merged into the main branch. 
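The "shadow deployment" routing described above is typically implemented by propagating a sandbox identifier (often a request header) through every call: if the header names a service the developer has modified, the request is routed to the shadow instance; otherwise it falls through to the shared baseline. A minimal sketch of that resolution logic (service names, hosts, and the header name are illustrative):

```python
# Baseline service versions in the shared staging environment.
BASELINE = {"cart": "cart.staging:8080", "payments": "payments.staging:8080"}

# Shadow deployments, keyed by sandbox ID, covering only modified services.
SHADOWS = {"pr-512": {"payments": "payments.pr-512:8080"}}

def resolve(service, headers):
    """Pick the upstream for a request: the shadow instance if this sandbox
    modified the service, otherwise the shared baseline instance."""
    sandbox = headers.get("x-sandbox-id")
    if sandbox and service in SHADOWS.get(sandbox, {}):
        return SHADOWS[sandbox][service]
    return BASELINE[service]
```

A developer testing pull request 512 sends `x-sandbox-id: pr-512`: their requests hit the shadow `payments` while `cart` and everything else stay on the shared baseline, so developers share one environment without affecting each other's work.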


A world-first law in Europe is targeting artificial intelligence. Other countries can learn from it

The act contains a list of prohibited high-risk systems. This list includes AI systems that use subliminal techniques to manipulate individual decisions. It also includes unrestricted and real-time facial recognition systems used by law enforcement authorities, similar to those currently used in China. Other AI systems, such as those used by government authorities or in education and healthcare, are also considered high risk. Although these aren’t prohibited, they must comply with many requirements. ... The EU is not alone in taking action to tame the AI revolution. Earlier this year the Council of Europe, an international human rights organisation with 46 member states, adopted the first international treaty requiring AI to respect human rights, democracy and the rule of law. Canada is also discussing the AI and Data Bill. Like the EU laws, this will set rules for various AI systems, depending on their risks. Instead of a single law, the US government recently proposed a number of different laws addressing different AI systems in various sectors. ... The risk-based approach to AI regulation, used by the EU and other countries, is a good start when thinking about how to regulate diverse AI technologies.


Building constructive partnerships to drive digital transformation

The finance team needs to have a ‘seat at the table’ from the very beginning to overcome these challenges and effect successful transformation. Too often, finance only becomes involved when it comes to the cost and financing of the project, and when finance leaders do try to become involved, they can have difficulty gaining access to the needed data. This was recently confirmed by members of the Future of Finance Leadership Advisory Group, where almost half of the group polled (47%) noted challenges gaining access to needed data. As finance professionals understand the needs of stakeholders within the business, they are in the best position to outline what is needed for IT to create an effective, efficient structure. Finance professionals are in-house consultants who collaborate with other functions to understand their workings and end-to-end procedures, discover where both problems and opportunities exist, identify where processes can be improved, and ultimately find solutions. Digital transformation projects rely on harmonizing processes and standardizing systems across different operations. 


DevSecOps: Integrating Security Into the DevOps Lifecycle

The core of DevSecOps is ‘security as code’, a principle that dictates embedding security into the software development process. To keep every release tight on security, we weave those practices into the heart of our CI/CD flow. Automation is key here, as it smooths out the whole security gig in our dev process, ensuring we are safe from the get-go without slowing us down. A shared responsibility model is another pillar of DevSecOps. Security is no longer the sole domain of a separate security team but a shared concern across all teams involved in the development lifecycle. Working together, security isn’t just slapped on at the end but baked into every step from start to finish. ... Adopting DevSecOps is not without its challenges. Shifting to DevSecOps means we’ve got to knock down the walls that have long kept our devs, ops and security folks in separate corners. Balancing the need for rapid deployment with security considerations can be challenging. To nail DevSecOps, teams must level up their skills through targeted training. Weaving together seasoned systems with cutting-edge DevSecOps tactics calls for a sharp, strategic approach. 
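The "security as code" gate described above can be made concrete. Below is a minimal, hypothetical Python sketch of a CI/CD policy gate that fails a pipeline when a scan reports findings at or above a severity threshold; the field names and threshold are illustrative, not taken from any particular scanner.

```python
# Hedged sketch: a "security as code" gate a CI/CD pipeline could run
# after a dependency or SAST scan. Severity levels and finding fields
# are illustrative assumptions, not any specific tool's schema.

SEVERITY_ORDER = {"low": 0, "medium": 1, "high": 2, "critical": 3}

def gate(findings, fail_at="high"):
    """Return (passed, blocking): blocking lists findings at or above
    the fail_at severity threshold; passed is True only if none exist."""
    threshold = SEVERITY_ORDER[fail_at]
    blocking = [f for f in findings
                if SEVERITY_ORDER[f["severity"]] >= threshold]
    return (len(blocking) == 0, blocking)

findings = [
    {"id": "CVE-2024-0001", "severity": "medium"},
    {"id": "CVE-2024-0002", "severity": "critical"},
]
passed, blocking = gate(findings, fail_at="high")
```

In a real pipeline this check would run automatically on every build, so security is enforced "from the get-go" rather than bolted on before release.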


Critical Android Vulnerability Impacting Millions of Pixel Devices Worldwide

This backdoor vulnerability, undetectable by standard security measures, allows unauthorized remote code execution, enabling cybercriminals to compromise devices without user intervention or knowledge due to the app’s privileged system-level status and inability to be uninstalled. The Showcase.apk application possesses excessive system-level privileges, enabling it to fundamentally alter the phone’s operating system despite performing a function that does not necessitate such high permissions. The application’s configuration file retrieval lacks essential security measures, such as domain verification, potentially exposing the device to unauthorized modifications and malicious code execution through compromised configuration parameters. The application suffers from multiple security vulnerabilities: insecure default variable initialization during certificate and signature verification allows validation checks to be bypassed, configuration file tampering risks compromising the device, and the application’s reliance on bundled public keys, signatures, and certificates creates a bypass vector for verification.
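The "insecure default variable initialization" flaw called out above is a classic fail-open pattern. The hypothetical Python sketch below (not the app's actual logic) shows why initializing a verification result to a permissive value lets any skipped or failing check pass, and how a fail-closed default avoids it.

```python
# Hedged sketch of the vulnerability class described above. Illustrative
# pseudologic only: signature_ok is True/False for a completed check,
# or None when the check was skipped or errored out.

def verify_insecure(signature_ok):
    verified = True          # insecure: permissive default (fail open)
    if signature_ok is not None:
        verified = signature_ok
    # if the check was skipped (None), verified silently stays True
    return verified

def verify_fail_closed(signature_ok):
    verified = False         # secure: default deny (fail closed)
    if signature_ok is True:
        verified = True
    return verified
```

With the permissive default, an attacker only needs to make the verification step skip or error to bypass it entirely; with the fail-closed version, anything short of an explicit pass is rejected.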


Using Artificial Intelligence in surgery and drug discovery

“We’re seeing how AI is adapting, learning, and starting to give us more suggestions and even take on some independent tasks. This development is particularly thrilling because it spans across diagnostics, therapeutics, and theranostics—covering a wide range of medical areas. We’re on the brink of AI and robotics merging together in a very meaningful way,” Dr Rao said. However, he said he would like to add a word of caution. He said he often tells junior enthusiasts who are eager to use AI in everything: AI is not a replacement for natural stupidity. ... He said that one of the most impressive applications of this AI was during the preparation of a US FDA application, which is typically a very cumbersome and expensive process. “At that point, I’d already completed the preclinical phase but wasn’t certain about the additional 20-30 tests I might need. Instead of spending hundreds of thousands of dollars on trial and error, we fed all our data into this AI system. Now, it’s important to note that pharma companies are usually reluctant to share their proprietary data, so gathering information is often a challenge,” he said.  


Mastercard Is Betting on Crypto—But Not Stablecoins

“We’re opening up this crypto purchase power to our 100 million-plus acceptance locations,” Raj Dhamodharan, Mastercard's head of crypto and blockchain, told Decrypt. “If consumers want to buy into it, if they want to be able to use it, we want to enable that—in a safe way.” Perhaps in the name of safety, the new MetaMask Card isn’t compatible with most cryptocurrencies. You can’t use it to buy a plane ticket with Pepecoin, or a sandwich with SHIB. The card is only compatible with dominant stablecoins USDT and USDC, as well as wrapped Ethereum. ... Dhamodharan and his team are currently endeavoring to create an alternative system to stablecoins that—instead of putting crypto companies like Circle and Tether in the catbird seat of the new digital economy—keeps payment services like Mastercard, and traditional banks, at center. Key to this plan is unlocking the potential of bank deposits, which already exist on digital ledgers—just not ones that live on-chain. Dhamodharan estimates that some $15 trillion worth of digital bank deposits currently exist in the United States alone.


A Group Linked To Ransomhub Operation Employs EDR-Killing Tool

Experts believe RansomHub is a rebrand of the Knight ransomware. Knight, also known as Cyclops 2.0, appeared in the threat landscape in May 2023. The malware targets multiple platforms, including Windows, Linux, macOS, ESXi, and Android. The operators used a double extortion model for their RaaS operation. The Knight ransomware-as-a-service operation shut down in February 2024, and the malware’s source code was likely sold to the threat actor who relaunched the RansomHub operation. ... “One main difference between the two ransomware families is the commands run through cmd.exe. While the specific commands may vary, they can be configured either when the payload is built or during configuration. Despite the differences in commands, the sequence and method of their execution relative to other operations remain the same.” states the report published by Symantec. Although RansomHub only emerged in February 2024, it has rapidly grown and, over the past three months, has become the fourth most prolific ransomware operator based on the number of publicly claimed attacks.



Quote for the day:

"When your values are clear to you, making decisions becomes easier." -- Roy E. Disney

Daily Tech Digest - August 16, 2024

W3C issues new technical draft for verifiable credentials standards

Part of the promise of the W3C standards is the ability to share only the data that’s necessary for completing a secure digital transaction, Goodwin explained, noting that DHS’s Privacy Office is charged with “embedding and enforcing privacy protections and transparency in all DHS activities.” DHS was brought into the process to review the W3C Verifiable Credentials Data Model and Decentralized Identifiers framework and to advise on potential issues. DHS S&T said in a statement last month that “part of the promise of the W3C standards is the ability to share only the data required for a transaction,” which it sees as “an important step towards putting privacy back in the hands of the people.” “Beyond ensuring global interoperability, standards developed by the W3C undergo wide reviews that ensure that they incorporate security, privacy, accessibility, and internationalization,” said DHS Silicon Valley Innovation Program Managing Director Melissa Oh. “By helping implement these standards in our digital credentialing efforts, S&T, through SVIP, is helping to ensure that the technologies we use make a difference for people in how they secure their digital transactions and protect their privacy.”


Managing Technical Debt in the Midst of Modernization

Rather than delivering a product and then worrying about technical debt, it is more prudent to measure and address it continuously from the early stages of a project, including requirement and design, not just the coding phase. Project teams should be incentivized to identify improvement areas as part of their day-to-day work and implement the fixes as and when possible. Early detection and remediation can help streamline IT operations, improve efficiencies, and optimize cost. ... Inadequate technical knowledge or limited experience in the latest skills itself leads to technical debt. Enterprises must invest and prioritize continuous learning to keep their talent pool up to date with the latest technologies. A skill-gap analysis helps forecast the need for skills for future initiatives. Teams should be encouraged to upskill in AI, cloud, and other latest technologies, as well as modern design and security standards. This will help enterprises address the technical debt skill-gap effectively. Enterprises can also employ a hub and spoke model, where a central team offers automation and expert guidance while each development team maintains their own applications, systems and related technical debt.


Generative AI Adoption: What’s Fueling the Growth?

The banking, financial services, and insurance (BFSI) sector is another area where generative AI is making a significant impact. In this industry, generative AI enhances customer service, risk management, fraud detection, and regulatory compliance. By automating routine tasks and providing more accurate and timely insights, generative AI helps financial institutions improve efficiency and deliver better services to their customers. For instance, generative AI can be used to create personalized customer experiences by analyzing customer data and predicting their needs. This capability allows banks to offer tailored products and services, improving customer satisfaction and loyalty. ... The life sciences sector stands to benefit enormously from the adoption of generative AI. In this industry, generative AI is used to accelerate drug discovery, facilitate personalized medicine, ensure quality management, and aid in regulatory compliance. By automating and optimizing various processes, generative AI helps life sciences companies bring new treatments to market more quickly and efficiently. For instance, generative AI can draw on masses of biological data to identify a probable medication far faster than conventional means.


Overcoming Software Testing ‘Alert Fatigue’

Before “shift left” became the norm, developers would write code that quality assurance testing teams would then comb through and identify the initial bugs in the product. Developers were then only tasked with reviewing the proofed end product to ensure it functioned as they initially envisioned. But now, the testing and quality control onus has been put on developers earlier and earlier. An outcome of this dynamic is that developers are becoming increasingly numb to the high volume of bugs they are coming across in the process, and as a result, they are pushing bad code to production. ... Organizations must ensure that vital testing phases are robust and well-defined to mitigate these adverse outcomes. These phases should include comprehensive automated testing, continuous integration (CI) practices, and rigorous manual testing by dedicated QA teams. Developers should focus on unit and integration tests, while QA teams handle system, regression, acceptance, and exploratory testing. This division of labor enables developers to concentrate on writing and refining code while QA specialists ensure the software meets the highest quality standards before production.


SSD capacities set to surge as industry eyes 128 TB drives

Maximum SSD capacity is expected to double from its current 61.44 TB maximum by mid-2025, giving us 122 TB and even 128 TB drives, with the prospect of exabyte-capacity racks. Five suppliers have discussed and/or demonstrated prototypes of 100-plus TB capacity SSDs recently. ... Systems with enclosures full of high-capacity SSDs will need to cope with drive failure, and that means RAID or erasure coding schemes. SSD rebuilds take less time than HDD rebuilds, but higher-capacity SSDs take longer. For example, rebuilding a 61.44 TB Solidigm D5-P5336 drive with a max sequential write bandwidth of 3 GBps would take approximately 5.7 hours. A 128 TB drive would take 11.85 hours at the same 3 GBps write rate. These are not insubstantial periods. Kioxia has devised an SSD RAID parity compute offload scheme with a parity compute block in the SSD controller and direct memory access to neighboring SSDs to get the rebuild data. This avoids the host server’s processor getting involved in RAID parity compute IO and could accelerate SSD rebuild speed.
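The rebuild-time figures above follow directly from capacity divided by write bandwidth. A quick sketch, assuming the rebuild is bounded only by the drive's 3 GBps max sequential write rate (decimal units, no parity-compute or host overhead):

```python
# Reproduces the article's rebuild-time estimates: capacity / bandwidth,
# ignoring parity computation and host-side overhead.

def rebuild_hours(capacity_tb, write_gbps=3.0):
    capacity_gb = capacity_tb * 1000        # TB -> GB, decimal units
    seconds = capacity_gb / write_gbps
    return seconds / 3600

h_61  = rebuild_hours(61.44)   # ~5.7 hours for the D5-P5336
h_128 = rebuild_hours(128)     # ~11.85 hours for a 128 TB drive
```

In practice a rebuild competes with foreground I/O for that bandwidth, so these are best-case lower bounds.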


Putting Individuals Back In Charge Of Their Own Identities

Digital identity comprises many signals to ensure it can accurately reflect the real identity of the relevant individual. It includes biometric data, ID data, phone data, and much more. In shareable IDs, these unique features are captured through a combination of AI and biometrics which provide robust protection against forgery and replication, and so provide a high assurance that a person is who they say they are. Importantly, these technologies provide an easy and seamless alternative to other verification processes. For most people, visiting a bank branch to prove their identity with paper documents is no longer convenient, while knowledge-based authentication, like entering your mother’s maiden name, is not viable because data breaches make this information readily available for sale to nefarious actors. It’s no wonder that 76% of consumers find biometrics more convenient, while 80% find them more secure than other options.  ... A shareable identity is a user-controlled identity credential that can be stored on a device and used remotely. Individuals can then simply re-use the same digital ID to gain access to services without waiting in line, offering time-saving convenience for all.


Revolutionizing cloud security with AI

Generative AI can analyze data from various sources, including social media, forums, and the dark web. AI models use this data to predict threat vectors and offer actionable insights. Enhanced threat intelligence systems can help organizations better understand the evolving threat landscape and prepare for potential attacks. Moreover, machine learning algorithms can automate threat detection across cloud environments, increasing the efficiency of incident response times. ... AI-driven automation is becoming helpful in handling repetitive security tasks, allowing human security professionals to focus on more complex challenges. Automation helps streamline and triage alerts, incident response, and vulnerability management. AI algorithms can process incident data faster than human operators, enabling quicker resolution and minimizing potential damage. ... AI models can enforce privacy policies by monitoring data access while ensuring compliance with regulations such as the General Data Protection Regulation in Europe, or the California Consumer Privacy Act. When bolstered by AI, homomorphic encryption and differential privacy techniques offer ways to analyze data while keeping sensitive information secure and anonymous.


Are CIOs at the Helm of Leading Generative AI Agenda?

The growing integration of generative AI into corporate technology and information infrastructures is likely to bring a notable shift to the role of CIOs. While many technology leaders are already spearheading gen AI adoption, their role goes beyond technology management. It now includes driving strategic growth and maintaining a competitive edge in an AI-driven landscape. ... The CIO role has evolved significantly over recent decades. Once focused primarily on maintaining system uptime and availability, CIOs now serve as key business enablers. As technology advances rapidly and organizations increasingly rely on IT, the CIO's influence on enterprise success continues to grow. According to the EY survey, CIOs who report directly to the CEO and co-lead the AI agenda are the most effective in driving strategic change. Sixty-three percent of CIOs are leading the gen AI agenda in their organizations, with CEOs close behind at 55%. Eighty-four percent of organizations where the gen AI agenda is co-led by the CIO and CEO achieve or anticipate achieving a 2x return on investment from gen AI, compared to only 56% of organizations where the agenda is led solely by CIOs.


Intel and Karma partner to develop software-defined car architecture

Instead of all those individual black boxes, each with a single job, the new approach is to consolidate the car's various functions into domains, with each domain being controlled by a relatively powerful car computer. These will be linked via Ethernet, usually with a master domain controller overseeing the entire network. We're already starting to see vehicles designed with this approach; the McLaren Artura, Audi Q6 e-tron, and Porsche Macan are all recent examples of software-defined vehicles. Volkswagen Group—which owns Audi and Porsche—is also investing $5 billion in Rivian specifically to develop a new software-defined vehicle architecture for future electric vehicles. In addition to advantages in processing power and weight savings, software-defined vehicles are easier to update over-the-air, a must-have feature since Tesla changed that paradigm. Karma and Intel say their architecture should also have other efficiency benefits. ... Intel is also contributing its power management SoC to get the most out of inverters, DC-DC converters, chargers, and as you might expect, the domain controllers use Intel silicon as well, apparently with some flavor of AI enabled.


Why the next Ashley Madison is just around the corner

Unfortunately, it’s not a matter of ‘if’ another huge data breach will occur – it’s simply a matter of when. Today organisations of all sizes, not just the big players, have a ticking time bomb on their hands with the potential to detonate their brand reputation and destroy customer loyalty. ... Due to a lack of dedicated cybersecurity teams and finite financial resources to allocate to protective measures, small organisations will often prove easier to successfully infiltrate when compared to the average big player. The potential reward from a single attack may be smaller, but hackers can combine successful attacks against multiple SMEs to match the financial gain of successfully hacking a large organisation, and with far less effort. SMEs are therefore increasingly likely to fall victim to financially crippling attacks, with 46% of all cyber breaches now impacting businesses with fewer than 1,000 employees. ... The very first step in any attack chain is always the use of tools to gather intelligence about the victim’s systems, version numbers of unpatched software in use, and insecure configuration or programming. Any hacker, whether a professional or amateur, is using scanning bots or relying on websites like Shodan.io, generating an attack list of victims with vulnerable software. 



Quote for the day:

“No one knows how hard you had to fight to become who you are today.” -- Unknown

Daily Tech Digest - August 15, 2024

Better Cloud Security Means Getting Back to Basics

Securing the cloud isn’t rocket science – it just requires a little extra knowledge. While it’s tempting to think of the cloud as a new frontier in computing (and, in some ways, it is), cloud security solutions have been around for almost as long as the cloud itself. The trouble is that most organizations don’t know how they should think about cloud security in the first place. ... A good starting point for many organizations is simply evaluating how effective their existing cloud security is. It isn’t enough to implement security solutions – even if they’re the right solutions. It’s also important to know that they are functioning as intended. Today’s organizations have more testing and validation tools at their fingertips than ever, and conducting breach and attack simulation, automated red teaming, and other exercises can lay bare where vulnerabilities and inefficiencies exist. Recent testing reveals that the basic security suites offered by the leading cloud providers are not enough to detect all – or even most – attack activity, highlighting the areas where organizations need to implement new protections and providing insight into what additional solutions may be necessary.


Cloud Waste Management: How to Optimize Your Cloud Resources

To better understand cloud waste, we need to understand the iron triangle of project management, which states that there is always a tradeoff between speed, quality, and cost. If you want to deliver a quality product or feature quickly, it will cost you more. Businesses are always trying to innovate and deliver continuous value to their customers. Often, that means putting pressure on delivery teams to improve time to market. One effect is overprovisioned capacity: resources provisioned to validate a theory or concept are not deleted as teams move on, either to deliver the accepted solution or to another project assignment. This is one of the major factors in cloud waste. ... Since you pay for each resource provisioned in the cloud, managing cloud waste becomes critical, as it directly impacts your business’s bottom line. CFOs and finance teams struggle to forecast and budget for cloud spend, as they never know what capacity is being wasted in the cloud, and there is no good way to review it regularly.
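One common control for the experiment-leftover waste described above is to tag provisional resources with an expiry date and periodically flag anything past it. A minimal sketch, with illustrative field names (a real implementation would read tags from the cloud provider's inventory API):

```python
# Hedged sketch: flag cloud resources whose tagged expiry has passed.
# The inventory structure and tag names are invented for illustration.

from datetime import date

def expired_resources(resources, today):
    """Return names of resources whose 'expires' date has passed.
    Resources with no expiry (permanent workloads) are skipped."""
    return [r["name"] for r in resources
            if r.get("expires") is not None and r["expires"] < today]

inventory = [
    {"name": "poc-db",   "expires": date(2024, 6, 1)},  # finished PoC
    {"name": "prod-api", "expires": None},              # permanent workload
    {"name": "spike-vm", "expires": date(2024, 9, 1)},  # still in use
]
stale = expired_resources(inventory, today=date(2024, 8, 15))
```

A scheduled job that reviews this list gives finance teams the regular waste review the article says is missing.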


Campus NaaS: Transforming Enterprise Networking

The flexibility of the NaaS model allows businesses to experiment with new technologies and use cases without the risk of large, upfront investments in hardware and expertise. This is particularly valuable as emerging technologies like AI and edge computing become more prevalent in enterprise environments. ... The potential benefits of Campus NaaS are significant, and organizations must carefully evaluate potential NaaS providers. Standards-based solutions ensure interoperability between different NaaS components and service providers, allowing businesses to seamlessly integrate NaaS solutions from various vendors without compatibility issues. Security capabilities and long-term roadmaps should also be considered. Campus NaaS is poised to play a pivotal role in shaping the future of enterprise networking, enabling businesses to build the agile, high-performance foundations needed to thrive in an increasingly digital world. As the technology continues to evolve and mature, we can expect to see even more innovative use cases and deployment models emerge, further cementing the role of Campus NaaS as a cornerstone of modern enterprise IT strategy. 


Applying Security Everywhere – How to Prioritise Risks Across Multiple Platforms

For IT architects and security teams, the joint challenge here is actually one of the oldest ones in IT – knowing what you have. Getting an accurate inventory of all your software assets and components is a hard task on one platform, let alone across internal datacenter deployments, web applications, public cloud implementations and modern cloud-native applications. Keeping this inventory up to date is harder still, given how much change will take place over time across the entire application estate. Alongside this inventory, there are other factors to consider. Not all applications are created equal, and an issue in an internal web application that is used by a few people every month will not be as important as a critical vulnerability in a business application that is responsible for generating revenue every day. Yet both of these applications may have a flaw, and alerts are sent requesting that fixes or updates be made. Internal processes and workflows will also affect the situation. While security teams might spot potential issues in an application or software component like an API, they will not be responsible for making the change themselves. 


Attempting Digital Transformation? Try Embracing Team Resistance

Resistance to transformation has several causes, Dewal says. First off, many logistics professionals already feel slammed, and don’t welcome the idea of new work. “It can feel like an add-on, creating competing priorities,” she says. Then there’s a fear-based resistance to the perceived complexity of the new tasks involved. “It’s too complex and we don’t have the right skill sets to be able to execute on them,” she says, describing this mindset. “Collectively, let’s call it the fear of failure, of getting it wrong.” Finally, there’s the familiar human tendency to prefer sticking with the status quo. “That can hide variations underneath it,” Dewal says. “Sometimes the team is not even sure why the transformation is needed. Sometimes, they feel like they’re not getting enough support in terms of executing it.” Further, the survey dug into two types of resistance – productive and unproductive. Productive resistance is the type that comes from on-the-ground knowledge and expertise that relates to the implementation itself. ... Leaders who avoided a top-down, change-or-die approach, and instead focused on communication and collaboration, had much better chance of success, the survey found.


How leading CISOs build business-critical cyber cultures

In information security, where risk is widespread, attacks are becoming increasingly sophisticated, and so much is on the line, one of the defining attributes of successful CISOs is their courage. The good news is, courage is a muscle that can be developed just like any other. It’s also a mindset. The CISOs on this panel described various internal motivators that keep them in the game, resilient, and adaptable, even in the face of daunting challenges. They made it clear that it’s a lot easier to be courageous when you’re driven by a love for what you do and maintain a clear line of sight to the impact you’re making. One of the common threads is their focus on “moments of truth,” those points of contact between cybersecurity and various stakeholders. Leaders who are intentional about this find they’re better able to see around corners and show up more strategically as business enablers. Rodgers says it’s a lesson she learned in the early days of her career when she worked on a help desk. Fielding complaints all day takes its own kind of courage. “But the beauty of it is, you get to know people and how they work,” she says. “I got to a point where I could anticipate what they were going to want, so I started proactively providing those things. ...”


How passkeys eliminate password management headaches

There are several usability challenges that could affect the adoption of passkeys. Key among them is compatibility, as passkeys may not work on outdated operating systems or older devices. Beyond the technical roadblocks, user resistance is often the reason for a failure to adopt new technology such as passkeys. After all, users have been leveraging passwords since the early 1960s. Emphasizing training and education on how to provision passkeys is essential to adoption, as registration could be challenging for non-tech-savvy users. It may be best to start with small groups or departments to address unique challenges within the organization’s diverse culture and educate users. Organizations are starting to adopt passkeys to enhance security and optimize productivity, and as with any new implementation, there will be challenges. Passkey implementation should begin with top-level leadership as early adopters, which will help employees buy in and ensure a smooth transition from traditional passwords to passkeys. Upfront investment in planning, and creating robust policies and processes, will be critical to the implementation’s success.


Six Common Digital Transformation Challenges

Aligned leadership helps in allocating resources efficiently, prioritizing initiatives that drive the most value, and mitigating risks associated with digital transformation efforts. Clear, consistent communication from aligned leaders also builds trust and motivates teams to adapt to new paradigms. Ultimately, leadership alignment serves as the backbone of successful digital transformation by driving coherent strategies and fostering an environment conducive to innovation and agility. Effective communication is paramount, with transparent discussions about goals, challenges, and expected outcomes. Additionally, establishing cross-functional teams can help integrate diverse perspectives, facilitating smoother transitions during technology adoption. By embedding these practices into the organizational fabric, leaders can drive successful digital transformation while maintaining strategic coherence. Addressing resistance to change and fostering a digital mindset among leaders is pivotal in navigating this digital transformation challenge. Resistance often stems from a fear of the unknown and a reluctance to abandon established processes. 


Why Can’t Automation Eliminate Configuration Errors?

The emergence of configuration intelligence changes the game in several ways. First, it means that anyone tasked with maintaining configurations can save a lot of time and trouble that used to involve manual, tedious but cognitively intense tasks like reading through YAML manifests or config files to identify tiny errors. Yes, some tools existed to do this before, but they mostly functioned more like “linters,” spotting obvious syntax errors. By simplifying the process, time to manually maintain configs is drastically reduced. ... The lack of detailed expertise has been a traditional problem of IaC products, which struggle to keep up with configuration recommendations across the dozens of software applications and infrastructure components they manage and automate. The lack of detailed configuration expertise also creates a cadre of in-house experts, who become key sources of institutional memory — but also major risks. When your load-balancing guru walks out the door to take another job, then everything they know that’s not clearly documented goes out the door too.
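The linter-versus-intelligence distinction above can be illustrated with a toy example: a syntax check only confirms a config parses, while a semantic check applies cross-field knowledge about what the values should mean. Both rules below are invented for illustration and use a JSON config to stay self-contained:

```python
# Hedged sketch: syntax-only linting vs. semantic configuration checks.
# The config schema and both semantic rules are illustrative assumptions.

import json

def syntax_ok(text):
    """A linter-style check: does the config even parse?"""
    try:
        json.loads(text)
        return True
    except json.JSONDecodeError:
        return False

def semantic_issues(config):
    """Checks that require knowledge of what the fields mean."""
    issues = []
    if not (1 <= config.get("port", 0) <= 65535):
        issues.append("port out of range")
    if config.get("tier") == "production" and config.get("replicas", 1) < 2:
        issues.append("production tier should run >= 2 replicas")
    return issues

text = '{"port": 99999, "replicas": 1, "tier": "production"}'
cfg = json.loads(text)
```

Note that the config above is perfectly valid JSON, so a syntax linter passes it even though both semantic rules fail; that gap is what configuration intelligence tools aim to close.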


Enterprise spending on cloud services keeps accelerating

“Enterprises are also choosing to house an ever-growing proportion of their data center gear in colocation facilities, further reducing the need for on-premise data center capacity. The rise of generative AI technology and services will only exacerbate those trends over the next few years, as hyperscale operators are better positioned to run AI operations than most enterprises,” he wrote. Dinsdale told me the workloads staying on-premises tend to be workloads that are either very complex and cannot easily be transitioned, are focused on highly sensitive data, are governed or influenced by regulatory issues, or are highly predictable and can be managed economically on premise. Enterprises worldwide are spending around $100 billion per year on their own data center IT hardware and associated infrastructure software, which has held flat for the last several years. By comparison, enterprises are now spending $80 billion per quarter on cloud services, not to mention another $65 billion per quarter on SaaS. “And those cloud and SaaS numbers are growing like gangbusters,” he said.
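The spending comparison above is easy to annualize. Using the article's approximate figures ($100B/year on-prem, $80B/quarter cloud services, $65B/quarter SaaS):

```python
# Annualizing the article's spending figures to make the gap explicit.
# All values in billions of US dollars.

on_prem_per_year = 100          # flat for several years
cloud_per_year = 80 * 4         # $80B/quarter -> annual
saas_per_year = 65 * 4          # $65B/quarter -> annual

total_as_a_service = cloud_per_year + saas_per_year
```

On these numbers, combined cloud and SaaS spend runs well over five times annual on-prem hardware and infrastructure software spend, which is the trend Dinsdale is describing.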



Quote for the day:

"The whole point of getting things done is knowing what to leave undone." -- Lady Stella Reading

Daily Tech Digest - August 14, 2024

MIT releases comprehensive database of AI risks

While numerous organizations and researchers have recognized the importance of addressing AI risks, efforts to document and classify these risks have been largely uncoordinated, leading to a fragmented landscape of conflicting classification systems. ... The AI Risk Repository is designed to be a practical resource for organizations in different sectors. For organizations developing or deploying AI systems, the repository serves as a valuable checklist for risk assessment and mitigation. “Organizations using AI may benefit from employing the AI Risk Database and taxonomies as a helpful foundation for comprehensively assessing their risk exposure and management,” the researchers write. “The taxonomies may also prove helpful for identifying specific behaviors which need to be performed to mitigate specific risks.” ... The research team acknowledges that while the repository offers a comprehensive foundation, organizations will need to tailor their risk assessment and mitigation strategies to their specific contexts. However, having a centralized and well-structured repository like this reduces the likelihood of overlooking critical risks.


Why Agile Alone Might Not Be So Agile: A Witty Look at Methodology Madness

Agile’s problems often start with a fundamental misunderstanding of what it truly means to be agile. When the Agile Manifesto was penned back in 2001, its authors intended it to be a flexible, adaptable approach to software development, free from the rigid structures and bureaucratic procedures of traditional methodologies. But fast forward to today, and Agile has become its own kind of bureaucratic monster in many organizations — a tyrant disguised as a liberator. Why does this happen? Let’s dissect the two main problems: the roles defined within Agile and the one-size-fits-all mentality that organizations apply to Agile methodology. One of the biggest hurdles to successful Agile adoption is the disconnect between the executive suite and the teams on the ground. Executives often see Agile as a magic bullet for faster delivery and higher productivity, without fully understanding the nuances of the methodology. This disconnect can lead to unrealistic demands and pressure on teams to deliver more with each Sprint, which in turn leads to burnout and decreased quality. Moreover, the Agile Manifesto’s disdain for comprehensive documentation can be problematic in complex projects. 


Feature Flags Wouldn’t Have Prevented the CrowdStrike Outage

Feature flagging is a valuable technique for decoupling the release of new features from code deployment, and advanced feature flagging tools usually support percentage-based rollouts. For example, you can enable a feature on X% of targets to ensure it works before reaching 100%. While it’s true that feature flags can help to prevent outages, given the scale and complexity of the CrowdStrike incident, they would not have been sufficient for three reasons. First, a comprehensive staged rollout requires more than just “gradually enable this flag over the next few days”: there has to be an integration with the monitoring stack to perform health checks and stop the rollout if there are problems, and there has to be a way to integrate with the CD pipeline to reuse the list of rollout targets and the list of health checks to track. Available feature flagging solutions require considerable work and expertise to support staged rollouts at any reasonable scale. Second, CrowdStrike’s config had a complex structure requiring a “configuration system” and a “content interpreter.” Such configs would benefit from first-class schema support and end-to-end type safety. 
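To make the percentage-based rollout idea concrete, here is a minimal sketch of how such tools typically bucket targets deterministically; the function and flag names are hypothetical, not CrowdStrike's or any specific vendor's implementation:

```python
import hashlib

def in_rollout(target_id: str, flag_name: str, percentage: float) -> bool:
    """Deterministically map a target into a bucket in [0, 100) and
    compare against the rollout percentage, so the same target always
    gets the same answer for a given flag."""
    digest = hashlib.sha256(f"{flag_name}:{target_id}".encode()).hexdigest()
    bucket = int(digest, 16) % 10000 / 100.0  # 0.00 .. 99.99
    return bucket < percentage

# A 25% rollout enables the flag for roughly a quarter of targets.
targets = [f"host-{i}" for i in range(10000)]
enabled = sum(in_rollout(t, "new-sensor-config", 25.0) for t in targets)
print(f"{enabled} of {len(targets)} targets enabled")  # roughly 2500
```

Note what this sketch does not do, which is exactly the article's point: there is no monitoring-stack integration to halt the rollout on failed health checks, and no link to a CD pipeline's target list.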


Putting Threat Modeling Into Practice: A Guide for Business Leaders

One of the primary benefits of threat modeling is its ability to reduce the number of defects that make it to production. By identifying potential threats and vulnerabilities during the design phase, companies can implement security measures that prevent these issues from ever reaching the production environment. This proactive approach not only improves the quality of products but also reduces the costs associated with post-production fixes and patches. ... Threat modeling helps us create reusable artifacts and reference patterns as code, which serve as blueprints for future projects. These patterns encapsulate best practices and lessons learned, ensuring that security considerations are consistently applied across all projects. By embedding these reference patterns into development processes, organizations reduce the need to reinvent the wheel for each new product, saving time and resources. ... The existence of well-defined reference patterns reduces the likelihood of errors during development. Developers can rely on these patterns as a guide, ensuring that they follow proven security practices without having to start from scratch. 
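A "reference pattern as code" can be as simple as a machine-checkable record of threats and their mitigations. The sketch below is a hypothetical illustration (the class names and the STRIDE-style categories are assumptions, not a specific tool's API) of how such a baseline lets a review flag unmitigated threats before a design reaches production:

```python
from dataclasses import dataclass, field

@dataclass
class Threat:
    name: str
    category: str                 # e.g. a STRIDE category
    mitigations: list = field(default_factory=list)

@dataclass
class ThreatModel:
    component: str
    threats: list = field(default_factory=list)

    def unmitigated(self):
        """Threats with no recorded mitigation -- the gaps a design
        review should catch before production."""
        return [t.name for t in self.threats if not t.mitigations]

# A reusable reference pattern: every internet-facing API starts from
# this baseline instead of a blank page.
api_baseline = ThreatModel(
    component="public-api",
    threats=[
        Threat("credential stuffing", "Spoofing", ["rate limiting", "MFA"]),
        Threat("injection", "Tampering", []),  # not yet mitigated
    ],
)
print(api_baseline.unmitigated())  # ['injection']
```

Because the pattern is code, it can be versioned, reviewed, and reused across projects, which is what makes the artifacts in the article cumulative rather than one-off.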


The magic of RAG is in the retrieval

The role of the LLM in a RAG system is to simply summarize the data from the retrieval model’s search results, with prompt engineering and fine-tuning to ensure the tone and style are appropriate for the specific workflow. All the leading LLMs on the market support these capabilities, and the differences between them are marginal when it comes to RAG. Choose an LLM quickly and focus on data and retrieval. RAG failures primarily stem from insufficient attention to data access, quality, and retrieval processes. For instance, merely inputting large volumes of data into an LLM with an expansive context window is inadequate if the data is excessively noisy or irrelevant to the specific task. Poor outcomes can result from various factors: a lack of pertinent information in the source corpus, excessive noise, ineffective data processing, or the retrieval system’s inability to filter out irrelevant information. These issues lead to low-quality data being fed to the LLM for summarization, resulting in vague or junk responses. It’s important to note that this isn’t a failure of the RAG concept itself. Rather, it’s a failure in constructing an appropriate “R” — the retrieval model.
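The division of labor described above can be sketched in a few lines. This is a deliberately naive retriever (term overlap rather than BM25 or dense embeddings) with the LLM call left out; the function names are illustrative, and the point is that the prompt the LLM sees is only as good as what "R" retrieves:

```python
def retrieve(query: str, corpus: list[str], k: int = 2) -> list[str]:
    """Rank documents by simple term overlap with the query; a real
    retrieval model would use BM25 or dense embeddings instead."""
    q_terms = set(query.lower().split())
    scored = sorted(
        corpus,
        key=lambda doc: len(q_terms & set(doc.lower().split())),
        reverse=True,
    )
    return scored[:k]

def build_prompt(query: str, passages: list[str]) -> str:
    """The LLM only summarizes what retrieval hands it; if the passages
    are noisy or irrelevant, the answer will be too."""
    context = "\n".join(f"- {p}" for p in passages)
    return f"Answer using only this context:\n{context}\n\nQuestion: {query}"

corpus = [
    "The retrieval model ranks passages before the LLM sees them.",
    "Bananas are rich in potassium.",
]
top = retrieve("how does the retrieval model rank passages", corpus, k=1)
prompt = build_prompt("how does the retrieval model rank passages", top)
```

Swapping the LLM at the end changes little; swapping the retriever changes everything, which is the article's thesis.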


What enterprises say the CrowdStrike outage really teaches

CrowdStrike made two errors, enterprises say. First, CrowdStrike didn’t account for the sensitivity of its Falcon client software for endpoints to the tabular data that described how to look for security issues. As a result, an update to that data crashed the client by introducing a condition that had existed before but hadn’t been properly tested. Second, rather than doing a limited release of the new data file that would almost certainly have caught the problem and limited its impact, CrowdStrike pushed it out to its entire user base. ... The 37 who didn’t hold Microsoft accountable pointed out that security software necessarily has a unique ability to interact with the Windows kernel software, and this means it can create a major problem if there’s an error. But while enterprises aren’t convinced that Microsoft contributed to the problem, over three-quarters think Microsoft could contribute to reducing the risk of a recurrence. Nearly as many said that they believed Windows was more prone to the kind of problem CrowdStrike’s bug created, and that view was held by 80 of the 89 development managers, many of whom said that Apple’s MacOS or Linux didn’t pose the same risk and that neither was impacted by the problem.


MIT researchers use large language models to flag problems in complex systems

The researchers developed a framework, called SigLLM, which includes a component that converts time-series data into text-based inputs an LLM can process. A user can feed these prepared data to the model and ask it to start identifying anomalies. The LLM can also be used to forecast future time-series data points as part of an anomaly detection pipeline. While LLMs could not beat state-of-the-art deep learning models at anomaly detection, they did perform as well as some other AI approaches. If researchers can improve the performance of LLMs, this framework could help technicians flag potential problems in equipment like heavy machinery or satellites before they occur, without the need to train an expensive deep-learning model. “Since this is just the first iteration, we didn’t expect to get there from the first go, but these results show that there’s an opportunity here to leverage LLMs for complex anomaly detection tasks,” says Sarah Alnegheimish, an electrical engineering and computer science (EECS) graduate student and lead author of a paper on SigLLM.
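The two ingredients described here, serializing a time series into text an LLM can consume and flagging anomalies against a forecast, can be sketched roughly as follows. This is a simplified illustration under assumed conventions, not SigLLM's actual encoding or detection logic:

```python
def series_to_text(values: list[float], decimals: int = 1) -> str:
    """Serialize a numeric time series into a compact comma-separated
    string that can be placed in an LLM prompt."""
    return ",".join(str(round(v, decimals)) for v in values)

def flag_anomalies(actual: list[float], forecast: list[float],
                   threshold: float = 3.0) -> list[int]:
    """Flag indices where the observed value deviates from the
    forecast (e.g. one produced by an LLM) by more than the threshold."""
    return [
        i for i, (a, f) in enumerate(zip(actual, forecast))
        if abs(a - f) > threshold
    ]

readings = [1.0, 1.0, 1.0, 10.0, 1.0]
prompt_fragment = series_to_text(readings)      # "1.0,1.0,1.0,10.0,1.0"
anomalies = flag_anomalies(readings, [1.0] * 5)  # index 3 stands out
```

The appeal the researchers describe is that the forecasting step reuses a pretrained LLM, so no task-specific deep-learning model has to be trained for each piece of equipment.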


Cybersecurity should return to reality and ditch the hype

This shift from educational content to marketing blurs the line between genuine security insights and commercial interests, leading organizations to invest in solutions that may not address their unique challenges. Additionally, buzzword-driven content has become rampant, where terms like “zero-trust architecture” or “blockchain for security” are frequently mentioned in passing without delving into the practicalities and limitations of these technologies. ... we must first recognize the critical distinction between genuine cybersecurity work and the broader tech-centric content that often overshadows it. Real cybersecurity practice is anchored in a relentless pursuit to understand and mitigate the ever-evolving threats to our systems. It is a discipline that demands deep, continuously updated knowledge of systems, networks, and human behavior, alongside a steadfast commitment to the principles of confidentiality, integrity, and availability. True cybersecurity practitioners are those who engage in the laborious tasks of vulnerability assessment, threat modeling, incident response, and the continuous enhancement of security postures, often without the allure of viral recognition or simplistic solutions.


Harnessing AI for 6G: Six Key Approaches for Technology Leaders

Leaders must understand the enabling technologies behind 6G, such as terahertz and quantum communication, and the transformative potential of AI in network deployment and management. ... Engaging with international bodies like the ITU to contribute to the standardization process is crucial. This will ensure AI technologies are integrated into network designs from the beginning. Early involvement in these discussions will also help technology leaders to anticipate future developments and prepare strategies accordingly. ... Advocating for an AI-native 6G network involves embedding large language models and other AI technology into network equipment. This strategy allows autonomous operations and optimizes network management through machine learning algorithms. Such a proactive approach will streamline operations and enhance the reliability and efficiency of the network infrastructure. ... Emphasize the convergence of computing and communication and develop user-centric services that leverage 6G and AI to improve user experiences across various industries. Leaders should focus on creating solutions that are not only technologically advanced but also address the practical needs and preferences of end-users.


GenAI compliance is an oxymoron. Ways to make the best of it

Confoundingly, genAI software sometimes does things that neither the enterprise nor the AI vendor told it to do. Whether that’s making things up (a.k.a. hallucinating), observing patterns no one asked it to look for, or digging up nuggets of highly sensitive data, it spells nightmares for CIOs. This is especially true when it comes to regulations around data collection and protection. How can CIOs accurately and completely tell customers what data is being collected about them and how it is being used when the CIO often doesn’t know exactly what a genAI tool is doing? What if the licensed genAI algorithm chooses to share some of that ultra-sensitive data with its AI vendor parent? “With genAI, the CIO is consciously taking an enormous risk, whether that is legal risk or privacy policy risks. It could result in a variety of outcomes that are unpredictable,” said Tony Fernandes, founder and CEO of user experience agency UEGroup. “If a person chooses not to disclose race, for example, but an AI is able to infer it and the company starts marketing on that basis, have they violated the privacy policy? That’s a big question that will probably need to be settled in court,” he said.



Quote for the day:

"Before you are a leader, success is all about growing yourself. When you become a leader, success is all about growing others" -- Jack Welch