Daily Tech Digest - September 20, 2024

The New Normal in Disaster Recovery: Preparing for Ransomware Attacks Takes a New Approach

Early detection of ransomware can be difficult due to sophisticated malware that operates stealthily, attacks occurring outside business hours, and the scale of large, complex networks. Rapid containment prevents further spread but requires quick decision-making to isolate systems without disrupting critical operations. Tracing the initial point of entry and identifying all compromised systems is complex and time-consuming but essential to prevent reinfection. Isolated recovery environments (IREs) or cleanrooms provide secure, isolated environments for data recovery and system rebuilding, designed to prevent reinfection during the recovery process. ... To protect against data loss, organizations of all types need to implement immutable and air-gapped backups, using write-once-read-many (WORM) technology and physically or logically isolating backup systems from the main network. Increasing backup frequency and redundancy is also advised, along with diversifying backup storage and maintaining multiple versions of backups with appropriate retention policies.
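
In practice, the WORM requirement often maps to object-storage retention locks. Here is a minimal sketch using Amazon S3 Object Lock via boto3; the bucket and key names are hypothetical, and the bucket is assumed to have been created with Object Lock enabled.

```python
import datetime
import boto3

s3 = boto3.client("s3")

# Retain the backup immutably for 90 days: in COMPLIANCE mode the object
# cannot be overwritten or deleted by anyone until the retention date passes.
retain_until = datetime.datetime.now(datetime.timezone.utc) + datetime.timedelta(days=90)

with open("db-backup-2024-09-20.tar.gz", "rb") as f:
    s3.put_object(
        Bucket="backup-vault",  # hypothetical bucket created with Object Lock enabled
        Key="backups/db-backup-2024-09-20.tar.gz",
        Body=f,
        ObjectLockMode="COMPLIANCE",
        ObjectLockRetainUntilDate=retain_until,
    )
```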


Big Tech criticises EU’s AI regulation – is it justified?

An open letter signed by various Big Tech leaders – including Stripe’s Patrick Collison and Meta’s Mark Zuckerberg – claims Europe is becoming less competitive and innovative than other regions due to “inconsistent regulatory decision making”. This letter follows a report from former Italian prime minister Mario Draghi, which called for an annual spending boost of €800bn to prevent a “slow and agonising decline” economically. But the Big Tech warning also follows the difficulties these companies have faced in training their AI models on the data of EU citizens who use their services. ... But the letter also says the EU’s current regulation means the bloc risks missing out on “open” AI models and the latest “multimodal” models that can operate across text, images and speech. The letter says that if companies are going to invest heavily in AI models for European citizens, then they need “clear rules” that enable the use of European data. “But in recent times, regulatory decision making has become fragmented and unpredictable, while interventions by the European Data Protection Authorities have created huge uncertainty about what kinds of data can be used to train AI models,” the letter reads.


Innovation: What is next?

Innovations in technology that prioritize environmental sustainability may offer potential solutions. However, the solution is not as straightforward as depending solely on temporary fixes and implementing a small number of innovative strategies. The analysis shows India’s green technology potential and innovation, particularly in wind, solar, geothermal, ocean, hydro, biomass, and waste energy. However, patenting activity has plateaued in recent years, indicating the need for a strategic approach to green technology innovation in India. ... Increasing private sector investment confidence and working with industry and universities can also make big changes. Moreover, through the strategic utilization of geo-political advantages and the establishment of a vibrant and cooperative environment, India has the potential to significantly advance its green technology industry and make substantial contributions to international endeavors aimed at addressing climate change, all the while promoting economic development. ... Further, deep-tech innovation and a focus on product creation in underserved markets can turn out to be a game changer for India. According to Nasscom, the start-up ecosystem will add 250 scale-ups in tech, logistics, automotive, fintech, and health tech by 2025.


What Lawyers Want You to Know About NFTs

"To avoid legal trouble, sellers of NFTs should make sure that they either own the copyright in the work of art associated with the NFT, or that they have the permission of the copyright owner to make and sell NFTs of the artwork,” says Tyler Ochoa, professor of law at Santa Clara University School of Law. “They should also avoid incorporating any other works of art or any trademarks that are owned by others. And if more than one person is involved in the project, such as an artist and an entrepreneur, they should clearly specify the rights and responsibilities of all parties to the project, and the division of any profits, in a signed, written agreement.” ... Trademark infringement is another significant concern. The Wright Law Firm’s Wright says as illustrated in Hermès Int'l v. Rothschild, the creation and sale of "MetaBirkins" NFTs, which depicted faux-fur versions of Hermès' Birkin handbags, led to claims of trademark infringement, trademark dilution, and cybersquatting. “[The Hermes Int’l v. Rothchild] case underscores the potential for NFTs to infringe on existing trademarks, especially when they replicate or closely imitate well-known brands without authorization,” says Wright. 


3 API Vulnerabilities Developers Accidentally Create

The problem with APIs isn’t so much that they’re hard to secure, but that they are prolific and developers prioritize other tasks over testing and securing APIs, she added. There are literally hundreds of thousands of API endpoints, so it’s not surprising things get missed. ... But it’s also an IT cultural problem that creates security problems. “At the end of the day, any developer is going to value breaking down their product backlog and their sprint backlog more than fixing vulnerabilities, because in the sprint, even in the waterfall model of software engineering, the focus is on completing features to get a complete product,” Paxton-Fear said. “Fixing bugs isn’t given the same priority. And this is how things get forgotten.” Instead, there need to be basic internal reviews where finding vulnerabilities is prioritized. And security can’t be the Department of No, because that ends up in conflict with developers instead of solving security problems. And IT organizations have to stop prioritizing speed over security. “While you can get a solution that can really help you manage it, if you don’t have the teamwork and the culture around security, it’s going to fail, just like anything else will,” she said.


What is pretexting? Definition, examples, and attacks

There are two main elements to a pretext: a character, played by the scam artist, and a plausible situation in which the character needs or has a right to specific information. For instance, because errors can arise with automatic payment systems, it’s plausible that a recurring bill payment we’ve set up might mysteriously fail, prompting the company we owe to reach out. An attacker taking on the character of a helpful customer service rep reaching out to fix the error might, as the scenario plays out, ask for the bank or credit card information needed to steal money from our accounts. ... Often lumped under the heading of pretexting, tailgating is a common technique for getting through a locked door: simply follow someone with access inside before the door closes. It can be considered pretexting because the tailgater often adopts a persona that encourages the person with the key to let them into the building — for instance, by wearing a jumpsuit and claiming they’re there to fix the plumbing, or by carrying a pizza box they say must be delivered to another floor.


Post-Digital Transformation: How to Evolve Beyond Initial Tech Adoption

Digital transformation often brings a cultural shift, as companies adopt new technologies that change how they operate. However, many organizations stop short of building a fully agile and adaptable culture. In a post-digital world, agility becomes a crucial differentiator. Technology is evolving faster than ever, and customer expectations are constantly changing. Businesses need to foster a culture where rapid experimentation, quick decision-making, and the ability to pivot are embedded in daily operations. This culture must extend across the entire organization, from leadership to frontline employees. To do this, companies can adopt agile methodologies, break down silos between departments and encourage cross-functional teams to collaborate. By creating an environment where employees are empowered to innovate and experiment without fear of failure, businesses can stay ahead of the curve. ... One of the most significant outcomes of digital transformation is the wealth of data that businesses now have access to. But collecting data is not enough—companies must be able to turn that data into actionable insights.


The AI Threat: Deepfake or Deep Fake? Unraveling the True Security Risks

AI-produced deepfakes and AI-improved phishing are a bigger problem. Deepfakes come in two varieties: voice and image/video, both of which are now rapidly improving commodity outputs from readily available gen-AI models – and neither of which is easy to detect by either humans or technology. ... The security industry is not waiting for the dam to break. There have been numerous new startups in 2024, all working on their own solutions for detecting AI and deepfake attacks, while existing firms have refocused on deepfake detection. Pindrop is an example of the latter. In July 2024, it raised $100 million in debt financing, primarily to develop additional tools able to detect deepfake voice attacks. Deepfake voice is the easiest deepfake to produce, the most employed, and the easiest to detect. This is because there are subtle audible clues that a voice is not human generated that can be detected by technology, if not by the human ear. The danger exists where that detection technology is not being used. The same can be said for the current generation of AI-enhanced polymorphic malware detection systems: they can work, but only where they are being used.


Traditional CX on Deathbed as AI Agents Thrive

AI agents are an indispensable part of modern CX strategies, enabling real-time personalization, proactive engagement and outcome tracking. This shift toward automation is key to reducing operational costs, as AI agents are made to handle tasks such as ticket routing, knowledge base management and first-contact resolutions. Eighty-six percent of CX leaders predicted that CX will be "utterly transformed" over the next three years. Human agents will be able to pick up complex conversations from an AI agent, which will already have the details regarding the issue, so the customer will no longer need to repeat themselves. AI will instead act as their copilot, shifting human roles toward "expertise-based work, away from routine tasks." Recognizing the evolving trend, Salesforce, a leader in AI integration, has introduced Agentforce, a "proactive, autonomous application that provides specialized, always-on support to employees or customers." Agentforce uses machine learning to deploy autonomous bots for routine customer service tasks. With AI agents, the company aligns its customer service efforts with business outcomes such as increased sales conversions or customer retention, which is directly tied to pricing.


Striking the balance between cybersecurity and operational efficiency

Security supports the business, the controls are aligned and make perfect sense, their implementation is smooth, they are behind the scenes, and you can always get help quickly. In case of an accident, you can move to either the left or the right, so you actually have more options than in any of the other lanes; this is quite flexible as well. You can see where I am going with this, right? Similarly, you need to be flexible with your cybersecurity strategy – develop your long-term strategy and start executing it, but use tactics to do so. When a move aligns well with a business opportunity, the chances of success are far greater than when you attempt it in the middle of a business disruption. Learn to leverage upcoming situations as great opportunities for the long-term advancement of your security strategy. ... It is important to understand that there are plenty of such frameworks and guidelines – to name just a few: ISO27XXX, NIST 800-XXX, NIST CSF, CIS, COBIT, COSO, ITIL, PCI, OWASP, plus a plethora of others, plus all the regulations. Further, the majority of these frameworks are quite similar when you actually break them down, with considerable overlap, but also serious gaps.



Quote for the day:

"The mediocre leader tells. The good leader explains. The superior leader demonstrates. The great leader inspires." -- Gary Patton

Daily Tech Digest - September 19, 2024

AI, Software Architecture and New Kinds of Products

Although AI won’t change the practice of software architecture, AI will make a big change in what software architects architect. The first generation of AI-enabled applications will be similar to what we have now. For example, integrating generative AI into word processing and spreadsheet applications (as Microsoft and Google are doing) or tools for AI-assisted programming (like GitHub Copilot and others). But before long, we will be building different kinds of software. ... Architects will also play a role in evaluating an AI’s performance. Evals determine whether the application’s performance is acceptable. But what does “acceptable” mean in the application’s context? That’s an architectural question. In an autonomous vehicle, misidentifying a pedestrian isn’t acceptable; picking a suboptimal route is tolerable. In a recommendation engine, poor recommendations aren’t a problem as long as a reasonable number are good. What’s “reasonable”? That’s an architectural question. Evals also give us our first glimpses of what running the application in production will be like. Is the performance acceptable? Is the cost of running it acceptable? 
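
To make the idea concrete, here is a minimal sketch of an eval harness in which "acceptable" is an explicit, per-severity error budget; the severity labels, thresholds, and model_under_test hook are hypothetical.

```python
def evaluate(model_under_test, cases, thresholds):
    """cases: (input, expected, severity) triples; thresholds: severity -> max error rate."""
    failures = {sev: 0 for sev in thresholds}
    totals = {sev: 0 for sev in thresholds}
    for prompt, expected, severity in cases:
        totals[severity] += 1
        if model_under_test(prompt) != expected:
            failures[severity] += 1
    # The application passes only if every severity class stays within its budget.
    return all(
        failures[sev] / max(totals[sev], 1) <= budget
        for sev, budget in thresholds.items()
    )

# The architectural decision lives in the thresholds: misidentifying a
# pedestrian gets zero budget, while a suboptimal route tolerates 5% errors.
thresholds = {"critical": 0.0, "minor": 0.05}
```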


The AI Power Paradox

Wellise notes that AI technologies may also help data centers to manage their energy consumption by monitoring environmental conditions and adjusting use of resources accordingly. “In one of our Frankfurt data centers, we deployed the use of AI to create digital twins where we model data associated with climate parameters,” he explains. AI can also help tech companies that operate data centers in different areas to allocate their resources according to the availability of renewables. If it is sunny in California and solar energy is available to a data center there, models can ramp up their training at that location and ramp it down in cloudy Virginia, Demeo says. “Just because they’re there doesn't mean they have to run at full capacity.” Data center customers, too, can have an impact. They can stipulate that they will only use a data center’s services under certain circumstances. “They will use your data center only until a certain price. Beyond that, they will not use it,” Chaudhuri relates. Though application of even the most moderate of these technologies is not yet widespread, advocates claim that these experimental setups may eventually be more widely applicable.
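
The ramp-up/ramp-down idea can be sketched as carbon-aware placement: weight work toward regions with a cleaner grid. The carbon-intensity figures and region names below are invented stand-ins for a real telemetry feed.

```python
def carbon_intensity(region):
    # gCO2/kWh from a hypothetical telemetry feed; a real system would poll
    # a grid-data API rather than hard-code values.
    feed = {"us-west (sunny California)": 120, "us-east (cloudy Virginia)": 410}
    return feed[region]

def allocate_training(regions, total_gpu_hours):
    # Weight allocation by inverse carbon intensity: cleaner grid, more work.
    weights = {r: 1.0 / carbon_intensity(r) for r in regions}
    norm = sum(weights.values())
    return {r: round(total_gpu_hours * w / norm) for r, w in weights.items()}

print(allocate_training(["us-west (sunny California)", "us-east (cloudy Virginia)"], 1000))
# -> roughly 774 GPU-hours in California, 226 in Virginia
```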


Quantinuum Unveils First Contribution Toward Responsible AI

This research has significant implications for the future of AI and quantum computing. One of the most notable outcomes is the potential to use quantum AI for interpretable models. In current large language models (LLMs) like GPT-4, decisions are often made in a “black box” fashion, making it difficult for researchers to understand how or why certain outputs are generated. In contrast, the QDisCoCirc model allows researchers to inspect the internal quantum states and the relationships between different words or sentences, providing insights into the decision-making process. In practical terms, this could have wide-reaching applications in areas such as question answering systems, also referred to as ‘classification’ challenges, where understanding how a machine reaches a conclusion is as important as the answer itself. By offering an interpretable approach, quantum AI using compositional methods, could be applied in fields like legal, medical, and financial sectors, where accountability and transparency in AI systems are critical. The study also showed that compositional generalization—the ability of the model to generalize from smaller training sets to larger and more complex inputs—was successful.


Differential privacy in AI: A solution creating more problems for developers?

Differential privacy protects personal data by adding random noise, making it harder to identify individuals while preserving the dataset. The fundamental concept revolves around a parameter, epsilon (ε), which acts as a privacy knob. A lower epsilon value results in stronger privacy but adds more noise, which in turn reduces the usefulness of the data. A developer at a major fintech company recently voiced frustration over differential privacy’s effect on their fraud detection system, which needs to detect tiny anomalies in transaction data. “When noise is added to protect user data,” they explained, “those subtle signals disappear, making our model far less effective.” Fraud detection thrives on spotting minute deviations, and differential privacy easily masks these critical details. The stakes are even higher in healthcare. For instance, AI models used for breast cancer detection rely on fine patterns in medical images. Adding noise to protect privacy can blur these patterns, potentially leading to misdiagnoses. This isn’t just a technical inconvenience—it can put lives at risk.
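
For readers unfamiliar with the mechanics, this is the classic Laplace mechanism: noise scaled to sensitivity/epsilon. A minimal sketch follows, with an illustrative count; it shows exactly the tension described above, since a smaller epsilon drowns subtle signals in noise.

```python
import numpy as np

def laplace_mechanism(true_value, sensitivity, epsilon):
    # Smaller epsilon -> larger noise scale -> stronger privacy, weaker signal.
    noise = np.random.laplace(loc=0.0, scale=sensitivity / epsilon)
    return true_value + noise

true_count = 42  # e.g., flagged transactions in a cohort (illustrative)
for eps in (0.1, 1.0, 10.0):
    print(f"epsilon={eps}: {laplace_mechanism(true_count, sensitivity=1.0, epsilon=eps):.1f}")
```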


Thinking of building your own AI agents? Don’t do it, advisors say

Large companies may be tempted to roll their own highly customized agents, he says, but they can get tripped up by fragmented internal data, by underestimating the resources needed, and by lacking in-house expertise. “While some companies may achieve success, it’s common for these projects to spiral out of control in terms of cost and complexity,” Ackerson says. “In many cases, buying a solution from a trusted partner can help organizations avoid the pitfalls of builder’s remorse and accelerate their path to success.” AlphaSense has trained its own AI agents, but many companies lack internal expertise, he says. In addition, organizations may project the development costs but ignore the cost of ongoing maintenance, he adds. “This is the largest cost, as maintaining AI systems over time can be complex and resource-intensive, requiring constant updates, monitoring, and optimization to ensure long-term functionality,” Ackerson says. Partnering with an AI provider can give companies access to proven, ready-made agents that have been tested and refined by thousands of users, he contends. “It’s faster to implement, less resource-intensive, and comes with the added benefit of ongoing updates and support — freeing companies to focus on other critical areas of their business,” he says.


Building an Enterprise Data Strategy: What, Why, How

After completing the assessment of your current data management efforts and defining your objectives and priorities, you can begin to assemble the data governance framework by defining roles, responsibilities, and procedures for the entire data lifecycle. This includes data ownership, access controls, security, and compliance as well as data consistency, accountability, and integrity. The next step is to establish the processes and tools used to manage data quality, which include data profiling, cleansing, standardization, and validation. Determine the mechanisms for integrating data to create a seamless and coherent data environment encompassing the entire enterprise. Data lifecycle management covers data retention policies, archival storage, and data purging to ensure efficient storage management. The glue that keeps the many moving pieces of an enterprise data strategy working together harmoniously is your company’s culture of data literacy and empowerment. Employees and managers must be trained to recognize the value of data to the organization, and the importance of maintaining its quality and security. 
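
As a small illustration of the standardization and validation step, here is a hedged sketch in pandas; the column names and rules are invented for the example.

```python
import pandas as pd

df = pd.DataFrame({
    "customer_id": [1, 2, 2, None],
    "email": ["a@x.com", "B@X.COM", "b@x.com", "not-an-email"],
})

df["email"] = df["email"].str.lower()  # standardization

# Validation: missing IDs, duplicate IDs, malformed emails.
invalid = df[
    df["customer_id"].isna()
    | df["customer_id"].duplicated(keep=False)
    | ~df["email"].str.contains("@", na=False)
]
print(f"{len(invalid)} of {len(df)} rows fail basic quality checks")
```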


Beware the Great AI Bubble Popping

This does not mean that the technology will never make money. Early stages of evolution in any tech usually involve trying products in the market by making them as accessible as possible and monetizing the solutions when there's clarity on use cases, sizable adoption, dependency and demand. Generative AI will take a while longer to get there. The Great Popping will also lead to the ecosystem thinning. Startups with speculative or unsustainable business models will close up shop as funding decreases. The most likely future scenario is that the AI landscape will shift to make room for a small number of long-term players that focus on practical applications, while the rest go bust. Despite sharing similarities with the dot-com bubble, the residue of the AI one will likely differ in that entire companies, especially the OpenAIs and the Anthropics, won't likely shutter completely. They may close down money-guzzling units, rejigger focus or even pivot entirely, but they are unlikely to vanish off the face of the earth as their dot-com counterparts did. Job losses are a likely inevitability, and few firms will hire the laid-off employees.


Why Jensen Huang and Marc Benioff see ‘gigantic’ opportunity for agentic AI

In the future, Huang noted, there will be AI agents that understand subtleties and that can reason and collaborate. They’ll be able to find other agents to “work together, assemble together,” while also talking to humans and soliciting feedback to improve their dialogue and outputs. Some will be “excellent” at particular skills, while others will be more general purpose, he noted. “We’ll have agents working with agents, agents working with us,” said Huang. “We’re going to supercharge the ever-loving daylights of our company. We’re going to come to work and a bunch of work we didn’t even realize needed to be done will be done.” Adoption needs to be demystified, he and Benioff agreed, with Huang noting that “it’s going to be a lot more like onboarding employees.” Benioff, for his part, underscored the importance of people being able to “actually understand” how they work and their purpose, and “need to get their hands in the soil.” ... Huang pointed out that the challenges we have in front of us are “many.” Some of these include fine-tuning and guardrailing, but scientists are making advancements in these areas every day. 


Navigating a Security Incident – Best Practices for Engaging Service Providers

Organizations experiencing a security incident must grapple with numerous competing issues simultaneously, usually under a very tight timeframe and the pressure of significant business disruption. Engaging qualified service providers is often critical to successfully resolving and minimizing the fall-out of the incident. These providers include forensic firms, public relations firms, restoration experts, and notification and call center vendors. Due to the nature of these services, they can have access to or even generate additional personal and sensitive information relevant to the incident. Protecting this information from third party or unauthorized disclosures during litigation, discovery, or otherwise, via the application of attorney-client privilege and the work product doctrine is essential. While there is no bright-line, uniform rule about how and under what circumstances these privileges attach to forensic reports and other information prepared by incident response providers, recent case law offers guidance as to how organizations can maximize the prospect that their assessments will remain shielded by the work product doctrine and/or the attorney-client privilege.


AWS claims customers are packing bags and heading back on-prem

You read that correctly – customers are finding that moving their IT back on-premises is so attractive compared with remaining on AWS that they are prepared to do this despite the significant effort involved. Hardly a resounding endorsement of the benefits of the cloud. AWS also says that customers may switch back to on-premises for a number of reasons, including "to reallocate their own internal finances, adjust their access to technology and increase the ownership of their resources, data and security." In fact, there have been a growing number of cases of companies moving some or even all their workloads back from the cloud – so-called cloud repatriation – and cost often seems to be a factor. ... Andrew Buss, IDC senior research director for EMEA, told The Register that while cloud repatriation is becoming more common, "we'd put the share of companies actively repatriating public cloud workloads in the single digit percentage sphere." Organizations are more likely to move to another public cloud provider if the incumbent is not meeting their needs, he said, and they have got more used to the cost economics of public cloud and can compare it to the long-term costs of running private IT infrastructure.



Quote for the day:

"Without initiative, leaders are simply workers in leadership positions." -- Bo Bennett

Daily Tech Digest - September 18, 2024

Putting Threat Modeling Into Practice: A Guide for Business Leaders

One of the primary benefits of threat modeling is its ability to reduce the number of defects that make it to production. By identifying potential threats and vulnerabilities during the design phase, companies can implement security measures that prevent these issues from ever reaching the production environment. This proactive approach not only improves the quality of products but also reduces the costs associated with post-production fixes and patches. ... Along similar lines, threat modeling can help meet obligations defined in contracts if those contracts include terms related to risk identification and management. ... Beyond obligations linked to compliance and contracts, many businesses also establish internal IT security goals. For example, they might seek to configure access controls based on the principle of least privilege or enforce zero-trust policies on their networks. Threat modeling can help to put these policies into practice by allowing organizations to identify where their risks actually lie. From this perspective, threat modeling is a practice that the IT organization can embrace because it helps achieve larger goals – namely, those related to internal governance and security strategy.


How Cloud Custodian conquered cloud resource management

Everybody knows the cloud bill is basically rate multiplied by usage. But while most enterprises have a handle on rate, usage is the hard part. You have different application teams provisioning infrastructure. You go through code reviews. Then when you get to five to 10 applications, you get past the point where anyone can possibly know all the components. Now you have containerized workloads on top of more complex microservices architectures. And you want to be able to allow a combination of cathedral (control) and bazaar (freedom of technology choice) governance, especially today with AI and all of the new frameworks and LLMs [large language models]. At a certain point you lose the script to be able to follow all of this in your head. There are a lot of tools to enable that understanding — architectural views, network service maps, monitoring tools — all feeling out different parts of the elephant versus giving an organization a holistic view. They need to know not only what’s in their cloud environment, but what’s being used, what’s conforming to policy, and what needs to be fixed, and how. That’s what Cloud Custodian is for — so you can define the organizational requirements of your applications and map those up against cloud resources as policy.
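
Cloud Custodian expresses those organizational requirements as declarative policies (normally YAML). As a hedged sketch, the snippet below builds an equivalent policy structure in Python and dumps it for `custodian run`; the policy name and tag key are hypothetical.

```python
import yaml

policy = {
    "policies": [
        {
            "name": "ec2-missing-owner-tag",  # hypothetical policy name
            "resource": "aws.ec2",
            "filters": [{"tag:owner": "absent"}],  # instances nobody claims ownership of
            "actions": ["stop"],
        }
    ]
}

with open("policy.yml", "w") as f:
    yaml.safe_dump(policy, f, sort_keys=False)

# Then evaluate it against the account:
#   custodian run --output-dir=out policy.yml
```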


5 Steps to Identify and Address Incident Response Gaps

To compress the time it takes to address an incident, it’s not enough to stick to the traditional eyes-on-glass model that network operations centers (NOCs) traditionally privilege. It’s too human-intensive and error-prone to effectively triage an increasingly overwhelming volume of data. To go from event to resolution with minimal toil and increased speed, teams can leverage AI and automation to deflect noise, surface only the most critical alerts and automate diagnostics and remediations. Generative AI can amplify that effect: For teams collaborating in ChatOps tools, common diagnostic questions can be used as prompts to get context and accelerate action. ... When an incident hits, teams spend too much time gathering information and looping in numerous people to tackle it. Generative AI can be used to quickly summarize key data about the incident and provide actionable insights at every step of the incident life cycle. It can also supercharge the ability to develop and deploy automation jobs faster, even by non-technical teams: Operators can translate conversational prompts into proposed runbook automation or leverage pre-engineered prompts based on common categories.
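
The "deflect noise, surface only the most critical alerts" step need not be exotic; as a minimal sketch, deduplicate repeated events and page only on high severity. The event fields and severity labels here are illustrative.

```python
from collections import Counter

events = [
    {"service": "api", "signal": "latency_high", "severity": "warning"},
    {"service": "api", "signal": "latency_high", "severity": "warning"},
    {"service": "db", "signal": "disk_full", "severity": "critical"},
]

# Collapse duplicates, then route: page on critical, log everything else.
deduped = Counter((e["service"], e["signal"], e["severity"]) for e in events)
for (service, signal, severity), count in deduped.items():
    if severity == "critical":
        print(f"PAGE on-call: {service}/{signal} (x{count})")
    else:
        print(f"log only: {service}/{signal} (x{count})")
```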


DevOps with OpenShift Pipelines and OpenShift GitOps

Unlike some other CI solutions, such as the legacy tool Jenkins, Pipelines is built on native Kubernetes technologies and is thus resource-efficient, since pipelines and tasks are only actively running when needed. Once the pipeline has completed, no resources are consumed by the pipeline itself. Pipelines and tasks are constructed using a declarative approach following standard Kubernetes practices. However, OpenShift Pipelines includes a user-friendly interface built into the OpenShift console that enables users to easily monitor the execution of the pipelines and view task logs as needed. The user interface also shows metrics for individual task execution, enabling users to better optimize pipeline performance. In addition, the user interface enables users to quickly create and modify pipelines visually. While users are encouraged to store tasks and Pipeline resources in Git, the ability to visually create and modify pipelines greatly reduces the learning curve and makes the technology approachable for new users. You can leverage pipelines-as-code to provide an experience that is tightly integrated with your backend Git provider, such as GitHub or GitLab.


Rethinking enterprise architects’ roles for agile transformation

Mounting technical debt and extending the life of legacy systems are key risks CIOs should be paranoid about. The question is, how should CIOs assign ownership to this problem, require that technical debt’s risks are categorized, and ensure there’s a roadmap for implementing remediations? One solution is to assign the responsibility to enterprise architects in a product management capacity. Product managers must define a vision statement that aligns with strategic and end-user needs, propose prioritized roadmaps, and oversee an agile backlog for agile delivery teams. ... Enterprise architects who have a software development background are ideal candidates to assume the delivery leader role and can steer teams toward developing platforms with baked-in security, performance, usability, and other best practices. ... Enterprise architects assuming a sponsorship role in these initiatives can help steer them toward force-multiplying transformations that reduce risks and provide additional benefits in improved experiences and better decision-making. CIOs who want enterprise architects to act as sponsors should provide them with a budget and oversee the development of a charter for managing investment priorities.


The best way to regulate AI might be not to specifically regulate AI. This is why

Most of the potential uses of AI are already covered by existing rules and regulations designed to do things such as protect consumers, protect privacy and outlaw discrimination. These laws are far from perfect, but where they are not perfect the best approach is to fix or extend them rather than introduce special extra rules for AI. AI can certainly raise challenges for the laws we have – for example, by making it easier to mislead consumers or to apply algorithms that help businesses to collude on prices. ... Finally, there’s a lot to be said for becoming an international “regulation taker”. Other jurisdictions such as the European Union are leading the way in designing AI-specific regulations. Product developers worldwide, including those in Australia, will need to meet those new rules if they want to access the EU and those other big markets. If Australia developed its own idiosyncratic AI-specific rules, developers might ignore our relatively small market and go elsewhere. This means that, in those limited situations where AI-specific regulation is needed, the starting point ought to be the overseas rules that already exist. There’s an advantage in being a late or last mover. 


How LLMs on the Edge Could Help Solve the AI Data Center Problem

Anyone interacting with an LLM in the cloud is potentially exposing the organization to privacy questions and the potential for a cybersecurity breach. As more queries and prompts are being done outside the enterprise, there are going to be questions about who has access to that data. After all, users are asking AI systems all sorts of questions about their health, finances, and businesses. To do so, these users often enter personally identifiable information (PII), sensitive healthcare data, customer information, or even corporate secrets. The move toward smaller LLMs that can either be contained within the enterprise data center – and thus not running in the cloud – or that can run on local devices is a way to bypass many of the ongoing security and privacy concerns posed by broad usage of LLMs such as ChatGPT. ... Pruning the models to reach a more manageable number of parameters is one obvious way to make them more feasible on the edge. Further, developers are shifting the GenAI model from the GPU to the CPU, reducing the processing footprint, and building standards for compiling. As well as the smartphone applications noted above, the use cases that lead the way will be those that are achievable despite limited connectivity and bandwidth, according to Goetz.
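
One widely used shrinking technique is post-training quantization. A minimal sketch with PyTorch dynamic quantization follows; the toy model is a stand-in, and real edge LLM pipelines typically combine pruning, quantization, and CPU-oriented runtimes.

```python
import os
import torch
import torch.nn as nn

model = nn.Sequential(nn.Linear(512, 512), nn.ReLU(), nn.Linear(512, 128))

# Convert Linear weights to int8; activations are quantized dynamically at runtime.
quantized = torch.quantization.quantize_dynamic(model, {nn.Linear}, dtype=torch.qint8)

def size_mb(m, path="/tmp/model.pt"):
    torch.save(m.state_dict(), path)
    return os.path.getsize(path) / 1e6

print(f"fp32: {size_mb(model):.2f} MB -> int8: {size_mb(quantized):.2f} MB")
```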


'Good complexity' can make hospital networks more cybersecure

Because complicated systems have structures, Tanriverdi says, it's difficult but feasible to predict and control what they'll do. That's not feasible for complex systems, with their unstructured connections. Tanriverdi found that as health care systems got more complex, they became more vulnerable. ... The problem, he says, is that such systems offer more data transfer points for hackers to attack, and more opportunities for human users to make security errors. He found similar vulnerabilities with other forms of complexity, including many different types of medical services handling health data, and decentralizing strategic decisions to member hospitals instead of making them at the corporate center. The researchers also proposed a solution: building enterprise-wide data governance platforms, such as centralized data warehouses, to manage data sharing among diverse systems. Such platforms would convert dissimilar data types into common ones, structure data flows, and standardize security configurations. "They would transform a complex system into a complicated system," he says. By simplifying the system, they would further lower its level of complication.


Threats by Remote Execution and Activating Sleeper Devices in the Context of IoT and Connected Devices

As the Internet of Things proliferates, the number of connected devices in both civilian and military contexts is increasing exponentially. From smart homes to military-grade equipment, the IoT ecosystem connects billions of devices, all of which can potentially be exploited by adversaries. The pagers in the Hezbollah case, though low-tech compared to modern IoT devices, represent the vulnerability of a system where devices are remotely controllable. In the IoT realm, the stakes are even higher, as everyday devices like smart thermostats, security cameras, and industrial equipment are interconnected and potentially exploitable. In a modern context, this vulnerability could be magnified when applied to smart cities, critical infrastructure, and defense systems. If devices such as power grids, water systems, or transportation networks are connected to the internet, they could be subjected to remote control by malicious actors. ... One of the most alarming aspects of this situation is the suspected infiltration of the supply chain. The pagers used by Hezbollah were reportedly tampered with before being delivered to the group, likely with explosives embedded within the devices.


Detecting vulnerable code in software dependencies is more complex than it seems

A “phantom dependency” refers to a package used in your code that isn’t declared in the manifest. This concept is not unique to any one language (it’s common in JavaScript, NodeJS, and Python). This is problematic because you can’t secure what you can’t see. Traditional SCA solutions focus on manifest files to identify all application dependencies, but those can be either under- or over-representative of the dependencies actually used by the application. They can be under-representative if the analysis starts from a manifest file that only contains a subset of dependencies, e.g., when additional dependencies are installed in a manual, scripted or dynamic fashion. This can happen in Python ML/AI applications, for example, where the choice of packages and versions often depends on operating systems or hardware architectures, which cannot be fully expressed by dependency constraints in manifest files. And they are over-representative if they contain dependencies not actually used. This happens, for example, if you dump the names of all the components contained in a bloated runtime environment into a manifest file.
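
A rough sketch of phantom-dependency detection for Python: compare the top-level modules the code actually imports against what the manifest declares. This glosses over the mismatch between import names and distribution names (e.g., `yaml` vs. `PyYAML`), which real SCA tools must resolve; the `src` directory and manifest path are hypothetical.

```python
import ast
import pathlib
import sys

def imported_modules(root):
    mods = set()
    for path in pathlib.Path(root).rglob("*.py"):
        tree = ast.parse(path.read_text(), filename=str(path))
        for node in ast.walk(tree):
            if isinstance(node, ast.Import):
                mods.update(alias.name.split(".")[0] for alias in node.names)
            elif isinstance(node, ast.ImportFrom) and node.module:
                mods.add(node.module.split(".")[0])
    return mods - set(sys.stdlib_module_names)  # ignore the standard library

def declared_packages(manifest="requirements.txt"):
    lines = pathlib.Path(manifest).read_text().splitlines()
    return {line.split("==")[0].strip().lower()
            for line in lines if line.strip() and not line.startswith("#")}

declared = declared_packages()
phantoms = {m for m in imported_modules("src") if m.lower() not in declared}
print("Imported but not declared in the manifest:", phantoms)
```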



Quote for the day:

"An accountant makes you aware but a leader makes you accountable." -- Henry Cloud

Daily Tech Digest - September 17, 2024

Dedicated Cloud: What It’s For and How It’s Different From Public Cloud

While dedicated cloud services give you a level of architectural control you will not get from public clouds, using them comes with trade-offs, the biggest one being the amount of infrastructure engineering ability needed. But if your team has concluded that a public cloud isn’t a good fit, you probably know that already and have at least some of that ability on hand. ... Ultimately, dedicated cloud is about keeping control and giving yourself options. You can quickly deploy different combinations of resources, interconnecting dedicated infrastructure with public cloud services, and keep fine-tuning and refining as you go. You get full control of your data and your architecture with the freedom to change your mind. The trade-off is that you must be ready to roll up your sleeves and manage operating systems, deploy storage servers, tinker with traffic routing and do whatever else you need to do to get your architecture just right. But again, if you already know that you need more knobs than you can turn using a typical public cloud provider, you are probably ready anyway.


Building a More Sustainable Data Center: Challenges and Opportunities in the AI Era

Sustainability is not just a compliance exercise on reducing the negative impact on the environment, it also can bring financial benefits to an organization. According to Gartner’s Unlock the Business Benefits of Sustainable IT Infrastructure report, “[Infrastructure and operations’] contribution to sustainability strategies tends to focus on environmental impact, but sustainability also can have a significant positive impact on non-environmental factors, such as brand, innovation, resilience and attracting talent.” As a result, boards should embrace the financial opportunities of companies’ Environmental, Sustainability, and Governance (ESG) compliance rather than consider it just another unavoidable compliance expense without a discernable return on investment (ROI). ... To improve data center resilience, Gartner recommends that organizations expand use of renewable energy using a long-term power purchase agreement to contain costs, generate their own power where feasible, and reuse and redeploy equipment as much as possible to maximize the value of the resource.


Data Business Evaluation

Why data businesses? Because they can be phenomenal businesses with extremely high gross margins — as good or better than software-as-a-service (SaaS). Often data businesses can be the best businesses within the industries that they serve. ... Data aggregation can be a valuable way to assemble a data asset as well, but the value typically hinges on the difficulty of assembling the data; if it is too easy to do, others will do it as well and create price competition. Often the value comes in aggregating a long tail of data that is costly to do more than once, either for the suppliers or for a competitive aggregator. ... The most stable data businesses tend to employ a subscription business model in which customers subscribe to a data set for an extended period of time. Subscription models are clearly better when the subscriptions are long term or, at least, auto-renewing. Not surprisingly, the best data businesses are generally syndicated subscription models. On the other end, custom data businesses that produce data for clients in a one-off or project-based manner generally struggle to attain high margins and predictability, but can be solid businesses if the data manufacturing processes are optimized.


Leveraging AI for water management

AI is reshaping the landscape of water management by providing predictive insights, optimising operations, and enabling real-time decision-making. One of AI’s key contributions is its ability to forecast water usage patterns. AI models can accurately predict water demand by analysing historical data and considering variables like weather conditions, population trends, and industrial activities. This helps water utilities allocate resources more effectively, minimising waste while ensuring consistent supply to communities. Water utilities can also integrate AI systems to monitor and optimise their supply networks. ... One of the most critical applications of AI is in water quality monitoring. Traditional methods of detecting water contaminants are labour-intensive and involve periodic testing, which can result in delayed responses to contamination events. AI, on the other hand, can process continuous data streams from IoT-enabled sensors installed in water distribution systems. These sensors monitor variables like pH levels, temperature, and turbidity, detecting changes in water quality in real time. AI algorithms analyse the data, triggering immediate alerts when contaminants or irregularities are detected.
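
As a minimal sketch of the real-time check described above: flag a reading when it drifts beyond a rolling z-score threshold. The sensor, window size, threshold, and sample stream are all illustrative.

```python
from collections import deque
import statistics

class SensorMonitor:
    def __init__(self, window=60, z_threshold=3.0):
        self.readings = deque(maxlen=window)
        self.z = z_threshold

    def check(self, value):
        alert = False
        if len(self.readings) >= 10:  # wait for a baseline before alerting
            mean = statistics.mean(self.readings)
            stdev = statistics.pstdev(self.readings) or 1e-9
            alert = abs(value - mean) / stdev > self.z
        self.readings.append(value)
        return alert

turbidity = SensorMonitor()
for reading in [1.1, 1.0, 1.2, 1.1, 1.0, 1.1, 1.2, 1.0, 1.1, 1.2, 9.5]:
    if turbidity.check(reading):
        print("ALERT: turbidity anomaly:", reading)
```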


History of Cybersecurity: Key Changes Since the 1990s and Lessons for Today

Most cyber attackers hadn’t considered using the internet to pursue financial gain or cause serious harm to organizations. To be sure, financial crimes based on computer hacking took place in the '90s and early 2000s. But they didn't dominate the news in an endless stream of cautionary tales, and most people thought the 1995 movie Hackers was a realistic depiction of how hacking worked. ... By the mid-2000s, however, internet-based attacks became more harmful and frequent. This was the era when threat actors realized they could build massive botnets and then use them to distribute spam or send scam emails. These attacks could have caused real financial harm, but they weren't exactly original types of criminal activity. They merely conducted traditional criminal activity, like scams, using a new medium: the internet. ... The 2010s were also a time of massive technological change. The advent of cloud computing, widespread adoption of mobile devices, and rollout of Internet of Things (IoT) hardware meant businesses could no longer define clear network perimeters or ensure that sensitive data always remained in their data centers. 


Gateways to havoc: Overprivileged dormant service accounts

Dormant accounts go unnoticed, leaving organizations unaware of their access privileges, the systems they connect to, how to access them, and even of their purpose of existence. Their elevated privileges, lax security measures, and invisibility make dormant service accounts prime targets for infiltration. By compromising such an account, attackers can gain significant access to systems and sensitive data, often without raising immediate suspicion for extended periods of time. During that time, cyber criminals can elevate privileges, exfiltrate data, disrupt operations, and install malware and backdoors, causing total mayhem completely undetected until it’s too late. The weaknesses that plague dormant accounts make them open doors into an organization’s system. If compromised, an overprivileged dormant account can expose sensitive data such as customer PII, PHI, intellectual property, and financial records, leading to costly and damaging data breaches. Even without being breached, dormant accounts are significant liabilities, potentially causing operational disruptions and regulatory compliance violations.
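
Hunting for such accounts can start very simply: flag anything idle past a cutoff that still holds elevated roles. A hedged sketch follows; the account records and role names are hypothetical, and real data would come from your identity provider's audit logs.

```python
from datetime import datetime, timedelta, timezone

accounts = [
    {"name": "svc-legacy-etl", "last_used": "2023-11-02", "roles": ["admin"]},
    {"name": "svc-billing", "last_used": "2024-09-15", "roles": ["read-only"]},
]

DORMANT_AFTER = timedelta(days=90)
PRIVILEGED = {"admin", "owner", "domain-admin"}
now = datetime.now(timezone.utc)

for acct in accounts:
    last = datetime.fromisoformat(acct["last_used"]).replace(tzinfo=timezone.utc)
    if now - last > DORMANT_AFTER and PRIVILEGED & set(acct["roles"]):
        print(f"REVIEW {acct['name']}: idle {(now - last).days} days, roles {acct['roles']}")
```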


Overcoming AI hallucinations with RAG and knowledge graphs

One challenge that has come up in deploying RAG into production environments is that it does not handle searches across lots of documents that contain similar or identical information. When these files are chunked and turned into vector embeddings, each one will have its data available for searching. When each of those files has very similar chunks, finding the right data to match that request is harder. RAG can also struggle when the answer to a query exists across a number of documents that cross-reference each other. RAG is not aware of the relationships between these documents. ... Rather than storing data in rows and columns for traditional searches, or as embeddings for vector search, a knowledge graph represents data points as nodes and edges. A node will be a distinct fact or characteristic, and edges will connect all the nodes that have relevant relationships to that fact. In the example of a product catalog, the nodes may be the individual products while the edges will be similar characteristics that each of those products possess, like size or color.
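
To make the node/edge picture concrete, here is a minimal sketch of the product-catalog example using networkx; the catalog itself is invented.

```python
import networkx as nx

products = {
    "jacket-a": {"color": "blue", "size": "M"},
    "jacket-b": {"color": "blue", "size": "L"},
    "hat-a": {"color": "blue", "size": "M"},
}

g = nx.Graph()
for name, attrs in products.items():
    g.add_node(name, **attrs)  # each product is a node

# Connect products that share a characteristic, labelling the edge with it.
names = list(products)
for i, a in enumerate(names):
    for b in names[i + 1:]:
        shared = [k for k, v in products[a].items() if products[b].get(k) == v]
        if shared:
            g.add_edge(a, b, shared=shared)

# A graph-aware retriever can now traverse relationships that plain
# vector similarity would miss.
print(list(g.edges(data=True)))
```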


Preparing for the next big cyber threat

In addressing emerging threats, CISOs will have to incorporate controls to counter adversarial AI tactics and foster synergies with data and AI governance teams. Controls to ensure quantum-resistant cryptography in the symmetric space to future-proof encrypted data and transmissions will also be put in place if they are not already. Many organizations — including banks — are already enforcing the use of quantum-resistant cryptography, for instance, with the use of the Advanced Encryption Standard (AES)-256 algorithm, because data encrypted by it is not vulnerable to cracking by quantum computers. Zero trust as a mindset and approach will be very important, especially in addressing insecure design components of OT environments used in Industry 4.0. Therefore, one of the key areas of strengthening protection would also be identity and access management (IAM). ... As part of strong cyber resilience, we need sound IR playbooks to effectively raise the drawbridges; we need plan Bs and plan Cs, business continuity plans, as well as tabletop exercises and red team drills that involve our supply chain vendors. And finally, response to the ever-evolving threat landscape will entail greater adaptability and agility.
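
On the symmetric side, AES-256 is straightforward to adopt today. A minimal sketch with AES-256-GCM from the Python `cryptography` package follows (the 32-byte key is what makes it AES-256); key storage and rotation are out of scope here.

```python
import os
from cryptography.hazmat.primitives.ciphers.aead import AESGCM

key = AESGCM.generate_key(bit_length=256)  # 256-bit key -> AES-256
aesgcm = AESGCM(key)
nonce = os.urandom(12)  # GCM nonce must be unique per encryption under a key

ciphertext = aesgcm.encrypt(nonce, b"sensitive payload", None)
assert aesgcm.decrypt(nonce, ciphertext, None) == b"sensitive payload"
```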


The Impact of AI on The Ethernet Switch Market

Enterprises investing in new infrastructure to support AI will have to choose which technology is best for their particular needs. InfiniBand and Ethernet will likely continue to coexist for the foreseeable future. It’s highly likely that Ethernet will remain dominant in most network environments while InfiniBand will retain its foothold in high-performance computing and specialized AI workloads. ... While InfiniBand has several very strong advantages, advances in Ethernet are quickly closing the gap, making its ubiquity likely to continue. There are multiple other reasons that enterprises are likely to stick with Ethernet, too, such as lower cost, existing in-house talent, prolific integrations with existing infrastructures, and compatibility with legacy applications, among others. ... The Ultra Ethernet Consortium is proactively working to extend Ethernet's life to ensure it remains useful and cost-effective for both current and future technologies. The aim is primarily to reduce the need for drastic shifts to alternative solutions that may constitute heavy lifts and costs in adapting existing networks. 


Making the Complex Simple: Authorization for the Modern Enterprise

Modernizing legacy authorization systems is essential for organizations to enhance security and support their growth and innovation. Modernizing and automating operations allows organizations to overcome the limitations of legacy systems, enhance the protection of sensitive information and stay competitive in today’s digital landscape. Simplifying access control and automating workflows to modernize and optimize operations greatly increases productivity and lowers administrative burdens. Organizations can direct important resources toward more strategic endeavors by automating repetitive operations, which increases output and promotes an agile corporate environment. This change improves operational efficiency and puts businesses in a better position to adapt to changing market demands. Enhancing security is another critical benefit of modernizing authorization systems. Centralized management coupled with advanced role-based access control (RBAC) strengthens an organization’s security posture by preventing unauthorized access. Centralized systems allow for efficient user permissions management, ensuring that only authorized individuals can access sensitive information. 
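
At its core, centralized RBAC can be as small as one mapping that every access check consults. A minimal sketch, with illustrative role and permission names:

```python
# One central place where roles map to permissions.
ROLE_PERMISSIONS = {
    "analyst": {"report:read"},
    "engineer": {"report:read", "service:deploy"},
    "admin": {"report:read", "service:deploy", "user:manage"},
}

def is_authorized(user_roles, permission):
    return any(permission in ROLE_PERMISSIONS.get(role, set()) for role in user_roles)

assert is_authorized(["engineer"], "service:deploy")
assert not is_authorized(["analyst"], "user:manage")  # least privilege holds
```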



Quote for the day:

"Motivation will almost always beat mere talent." -- Ralph Augustine Norman

Daily Tech Digest - September 16, 2024

AI Ethics – Part I: Guiding Principles for Enterprise

The world has now caught up to what was previously science fiction. We are now designing AI that is in some ways far more advanced than anything Isaac Asimov could have imagined, while at the same time being far more limited. Even though they were originally conceived as fictional principles, there have been efforts to adapt and enhance Isaac Asimov’s Three Laws of Robotics to fit modern enterprise AI-based solutions. Here are some notable examples: Human-Centric AI Principles - Modern AI ethics frameworks often emphasize human safety and well-being, echoing Asimov’s First Law. ... Ethical AI Guidelines - Enterprises are increasingly developing ethical guidelines for AI that align with Asimov’s Second Law. These guidelines ensure that AI systems obey human instructions while prioritizing ethical considerations. ... Bias Mitigation and Fairness - In line with Asimov’s Third Law, there is a strong focus on protecting the integrity of AI systems. This includes efforts to mitigate biases and ensure fairness in AI outputs. ... Enhanced Ethical Frameworks - Some modern adaptations include additional principles, such as the “Zeroth Law,” which prioritizes humanity’s overall well-being. 


Power of Neurodiversity: Why Software Needs a Revolution

Neurodiversity, which includes ADHD, autism spectrum disorder, and dyslexia, presents unique challenges for individuals, yet it also comes with many unique strengths. People on the autism spectrum often excel in logical thinking, while individuals with ADHD can demonstrate exceptional attention to detail when engaged in areas of interest. Those with dyslexia frequently display creative thinking skills. However, software design often fails to accommodate neurodiverse users. For example, websites or apps with cluttered interfaces can overwhelm users with ADHD, while sites that rely heavily on text make it harder for individuals with dyslexia to process information. Additionally, certain sounds or bright colors may be overwhelming for someone with autism. Users should not have to adapt to poorly designed software. Instead, software designers must create products designed to meet these user needs. Waiting to receive software accessibility training on the job may be too late, as software designers and developers will need to relearn foundational skills. Moreover, accessibility still does not seem to be a priority in the workplace, with most job postings for relevant positions not requiring these skills.


Protect Your Codebase: The Importance of Provenance

When you know that provenance is a vector for a software supply chain attack, you can take action to protect it. The first step is to collect the provenance data for your dependencies, where it exists; projects that meet SLSA level 1 or higher produce provenance data you can inspect and verify. Ensure that trusted identities generate provenance. If you can prove that provenance data came from a system you own and secured or from a known good actor, it’s easier to trust. Cryptographic signing of provenance records provides assurance that the record was produced by a verifiable entity — either a person or a system with the appropriate cryptographic key. Store provenance data in a write-once repository. This allows you to verify later if any provenance data was modified. Modification, whether malicious or accidental, is a warning sign that your dependencies have been tampered with somehow. It’s also important to protect the provenance you produce for yourself and any downstream users. Implement strict access and authentication controls to ensure only authorized users can modify provenance records. 
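
As a hedged sketch of signing a provenance record so a verifiable identity stands behind it, here is an Ed25519 example using the Python `cryptography` package. The record fields loosely echo SLSA-style provenance but are illustrative, not spec-compliant.

```python
import json
from cryptography.hazmat.primitives.asymmetric.ed25519 import Ed25519PrivateKey

record = {
    "artifact": "myapp-1.4.2.tar.gz",                    # hypothetical artifact
    "builder": "https://ci.example.com/builds/9182",     # hypothetical build system
    "source": "git+https://example.com/myapp@<commit>",  # hypothetical source ref
}

private_key = Ed25519PrivateKey.generate()
payload = json.dumps(record, sort_keys=True).encode()  # canonicalize before signing
signature = private_key.sign(payload)

# A downstream consumer with the public key verifies before trusting the record;
# verify() raises InvalidSignature if the record was tampered with.
private_key.public_key().verify(signature, payload)
```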


Are You Technical or Non-Technical? Time to Reframe the Discussion

The term “technical” can introduce bias into hiring and career development, potentially leading to decisions swayed more by perception than by a candidate’s qualifications. Here, hiring decisions can sometimes reflect personal biases if candidates do not fit a stereotypical image or lack certain qualifications not essential for the role. For instance, a candidate might be viewed as not technical enough if they lack server administration experience, even when the job primarily involves software development. Unconscious bias can skew evaluations, leading to decisions based more on perceptions than actual skills. To address this issue, it is important to clearly define the skills required for a position. For example, rather than broadly labeling a candidate as “not technical enough,” it is more effective to specify areas for improvement, such as “needs advanced database management skills.” This approach not only highlights areas where candidates excel, such as developing user-centric reports, but also clarifies specific shortcomings. Clearly stating requirements, such as “requires experience building scalable applications with technology Y,” enhances the transparency and objectivity of the hiring process.


Will Future AI Demands Derail Sustainable Energy Initiatives?

The single biggest thing enterprises are doing to address energy concerns is moving toward more energy efficient second-generation chips, says Duncan Stewart, a research director with advisory firm Deloitte Technology, via email. "These chips are a bit faster at accelerating training and inference -- about 25% better than first-gen chips -- and their efficiency is almost triple that of first-generation chips." He adds that almost every chipmaker is now targeting efficiency as the most important chip feature. In the meantime, developers will continue to play a key role in optimizing AI energy needs, as well as validating whether AI is even required to achieve a particular outcome. "For example, do we need to use a large language model that requires lots of computing power to generate an answer from enormous data sets, or can we use more narrow and applied techniques, like predictive models that require much less computing because they’ve been trained on much more specific and relevant data sets?" Warburton asks. "Can we utilize compute instances that are powered by low-carbon electricity sources?"


When your cloud strategy is ‘it depends’

As for their use of private cloud, some of the rationale is purely a cost calculation. For some workloads, it’s cheaper to run on premises. “The cloud is not cheaper. That’s a myth,” one of the IT execs told me, while acknowledging cost wasn’t their primary reason for embracing cloud anyway. I’ve been noting this for well over a decade. Convenience, not cost, tends to drive cloud spend—and leads to a great deal of cloud sprawl, as Osterman Research has found. ... You want developers, architects, and others to feel confident with new technology. You want to turn them into allies, not holdouts. Jassy declared, “Most of the big initial challenges of transforming the cloud are not technical” but rather “about leadership—executive leadership.” That’s only half true. It’s true that developers thrive when they have executive air cover. This support makes it easier for them to embrace a future they likely already want. But they also need that executive support to include time and resources to learn the technologies and techniques necessary for executing that new direction. If you want your company to embrace new directions faster, whether cloud or AI or whatever it may be, make it safe for them to learn. 


4 steps to shift from outputs to outcomes

Shifting the focus to outcomes — business results aligned with strategic goals — was the key to unlocking value. David had to teach his teams to see the bigger picture of their business impact. By doing this, every project became a lever to achieve revenue growth, cost savings, and customer satisfaction, rather than just another task list. Simply being busy doesn’t mean a project is successful in delivering business value, yet many teams proudly wear busy badges, leaving executives wondering why results aren’t materializing. Busy doesn’t equal productive. In fact, busy gets in the way of being productive. ... A common issue is project teams lose sight of how their work aligns with the company’s broader goals. When David took over, his teams were still disconnected from those strategic objectives, but by revisiting them and ensuring that every project directly supported those goals, the teams could finally see they were part of something much larger than just a list of tasks. Many business leaders think their teams are mind readers. They hold a town hall, send out a slide deck, and then expect everyone to get it. But months later, they’re surprised when the strategy starts slipping through their fingers.


Is Your Business Ready For The Inevitable Cyberattack?

Cybersecurity threats are inevitable, making it essential for businesses to prepare for the worst. The critical question is: if your business is hacked, is your data protected, and can you recover it in hours rather than days or weeks? If not, you are leaving your business vulnerable to severe disruptions. While everyone emphasises the importance of backups, the real challenge lies in ensuring their integrity and recoverability. Are your backups clean? Can you quickly restore data without prolonged downtime? The total cost of ownership (TCO) of your data protection strategy over time is a crucial consideration. Traditional methods, such as relying on Iron Mountain for physical backups, are cumbersome and time-bound, requiring significant effort to locate and restore data. ... The story of data storage, much like the shift to cloud computing, revolves around strategically placing the right parts of your business operations in the most suitable locations at the right times. Data protection follows the same principle. Resilience is still a topic of frequent discussion, yet its broad nature makes it challenging to establish a clear set of best practices.


Digital twin in the hospitality industry-Innovations in planning & designing a hotel

The Metaverse is revolutionising the guest experience: hotels can offer virtual reality tours of rooms and services, giving guests the chance to preview their stay before booking. Moreover, hotels can provide tailored virtual experiences through interactive concierge services and bespoke room décor options. More events will be held through immersive games and interactive entertainment, bringing better visitor experiences to the hospitality industry. It can generate revenue through tickets, sponsorships, and virtual item sales. ... Operational efficiency is the bottom line of hospitality, where everything seems small but matters so much for guest satisfaction. Imagine the case where the HVAC system of a hotel or its lighting is controlled by a digital twin. Managers will thus understand the energy consumption patterns and predict what will require maintenance, so they can adjust those settings accordingly based on real-time data. Digital twins also enable better training of staff and use of resources. Staff can get comfortable with changes in procedures and layout beforehand by interacting with the virtual model.


The cybersecurity paradigm shift: AI is necessitating the need to fight fire with fire

Organisations should be prepared for the worst-case scenario of a cyber-attack to establish cyber resilience. This involves being able to protect and secure data, detect cyber threats and attacks, and respond with automated data recovery processes. Each element is critical to ensuring an organization can maintain operational integrity under attack. ... However, the reality is that many organisations are unable to keep up. From the company's recent survey released in late January 2024, 79% of IT and security decision-makers said they did not have full confidence in their company’s cyber resilience strategy. Just 12% said their data security, management, and recovery capabilities had been stress tested in the six months prior to being surveyed. ... To bolster cyber resilience, companies must integrate a robust combination of people, processes, and technology. Fostering a skilled workforce equipped to detect and respond to threats effectively starts with having employee education and training in place to keep pace with the rising sophistication of AI-driven phishing attacks.



Quote for the day:

"If you want to achieve excellence, you can get there today. As of this second, quit doing less-than-excellent work." -- Thomas J. Watson