Daily Tech Digest - August 17, 2024

The importance of connectivity in IoT

There is no point in having IoT if the connectivity is weak. Without reliable connectivity, the data from sensors and devices that is meant to be collected and analysed in real time may be delayed by the time it is delivered. In healthcare, for example, connected devices monitor the vital signs of patients in an intensive-care ward in real time and alert physicians to any observations outside the specified limits. ... The future evolution of connectivity technologies will combine with IoT to significantly expand its capabilities. The arrival of 5G will enable high-speed, low-latency connections. This transition will usher in IoT systems that were previously impossible, such as self-driving vehicles that instantaneously analyse vehicle state and provide real-time collision avoidance. The evolution of edge computing will bring data processing closer to the edge (the IoT devices themselves), significantly reducing latency and bandwidth costs. Connectivity underpins almost everything we see as important in IoT – the data exchange, real-time usage, scale and interoperability we rely on in our systems.


Aren’t We Transformed Yet? Why Digital Transformation Needs More Work

When it comes to enterprise development, platforms alone can’t address the critical challenge of maintaining consistency between development, test, staging, and production environments. What teams really need is seamless propagation of changes between environments that are kept production-like through synchronization, with full control over the process. This control enables the integration of crucial safety steps such as approvals, scans, and automated testing, ensuring that issues are caught and addressed early in the development cycle. Many enterprises are implementing real-time visualization capabilities to provide administrators and developers with immediate insight into differences between instances, including scoped apps, store apps, plugins, update sets, and even versions across the entire landscape. This extended visibility is invaluable for quickly identifying and resolving discrepancies before they can cause problems in production environments. A lack of focus on achieving real-time multi-environment visibility is akin to performing a medical procedure without an X-ray, CT, or MRI of the patient.


Why Staging Doesn’t Scale for Microservice Testing

So are we doomed to live in a world where staging is eternally broken? As we’ve seen, traditional approaches to staging environments are fraught with challenges. To overcome these, we need to think differently. This brings us to a promising new approach: canary-style testing in shared environments. This method allows developers to test their changes in isolation within a shared staging environment. It works by creating a “shadow” deployment of the services affected by a developer’s changes while leaving the rest of the environment untouched. This approach is similar to canary deployments in production but applied to the staging environment. The key benefit is that developers can share an environment without affecting each other’s work. When a developer wants to test a change, the system creates a unique path through the environment that includes their modified services, while using the existing versions of all other services. Moreover, this approach enables testing at the granularity of every code change or pull request. This means developers can catch issues very early in the development process, often before the code is merged into the main branch. 
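The routing mechanics described above can be sketched in a few lines: a request tagged with a per-change routing key is steered to the developer's "shadow" deployment, while untagged traffic and unaffected services fall back to the shared baseline. All names below (the `x-routing-key` header, the service addresses) are illustrative, not taken from any specific product.

```python
# Minimal sketch of header-based routing in a shared staging environment.

# Baseline deployments shared by everyone.
BASELINE = {
    "cart": "cart.staging.svc:8080",
    "payments": "payments.staging.svc:8080",
}

# "Shadow" deployments, keyed by (routing key, service name). Only the
# services a developer actually changed get an entry here.
SHADOWS = {
    ("pr-1234", "payments"): "payments-pr-1234.staging.svc:8080",
}

def resolve(service: str, headers: dict) -> str:
    """Pick the shadow instance when the request carries a matching
    routing key; otherwise fall back to the shared baseline."""
    key = headers.get("x-routing-key")
    return SHADOWS.get((key, service), BASELINE[service])

# A request tagged with the PR's routing key reaches the modified
# payments service but the unmodified baseline cart service.
assert resolve("payments", {"x-routing-key": "pr-1234"}) == "payments-pr-1234.staging.svc:8080"
assert resolve("cart", {"x-routing-key": "pr-1234"}) == "cart.staging.svc:8080"
# Everyone else still hits the baseline.
assert resolve("payments", {}) == "payments.staging.svc:8080"
```

Because the routing key travels with each request, many developers can test concurrently in one environment without their changes colliding.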


A world-first law in Europe is targeting artificial intelligence. Other countries can learn from it

The act contains a list of prohibited high-risk systems. This list includes AI systems that use subliminal techniques to manipulate individual decisions. It also includes unrestricted, real-time facial recognition systems used by law enforcement authorities, similar to those currently used in China. Other AI systems, such as those used by government authorities or in education and healthcare, are also considered high risk. Although these aren’t prohibited, they must comply with many requirements. ... The EU is not alone in taking action to tame the AI revolution. Earlier this year the Council of Europe, an international human rights organisation with 46 member states, adopted the first international treaty requiring AI to respect human rights, democracy and the rule of law. Canada is also discussing the AI and Data Bill. Like the EU laws, this will set rules for various AI systems, depending on their risks. Instead of a single law, the US government recently proposed a number of different laws addressing different AI systems in various sectors. ... The risk-based approach to AI regulation, used by the EU and other countries, is a good start when thinking about how to regulate diverse AI technologies.


Building constructive partnerships to drive digital transformation

The finance team needs to have a ‘seat at the table’ from the very beginning to overcome these challenges and effect successful transformation. Too often, finance only becomes involved when it comes to the cost and financing of the project, and when finance leaders do try to become involved, they can have difficulty gaining access to the needed data. This was recently confirmed by members of the Future of Finance Leadership Advisory Group, where almost half of the group polled (47%) noted challenges gaining access to needed data. As finance professionals understand the needs of stakeholders within the business, they are in the best position to outline what is needed for IT to create an effective, efficient structure. Finance professionals are in-house consultants who collaborate with other functions to understand their workings and end-to-end procedures, discover where both problems and opportunities exist, identify where processes can be improved, and ultimately find solutions. Digital transformation projects rely on harmonizing processes and standardizing systems across different operations. 


DevSecOps: Integrating Security Into the DevOps Lifecycle

The core of DevSecOps is ‘security as code’, a principle that dictates embedding security into the software development process. To keep every release tight on security, we weave those practices into the heart of our CI/CD flow. Automation is key here, as it smooths out the whole security gig in our dev process, ensuring we are safe from the get-go without slowing us down. A shared responsibility model is another pillar of DevSecOps. Security is no longer the sole domain of a separate security team but a shared concern across all teams involved in the development lifecycle. Working together, security isn’t just slapped on at the end but baked into every step from start to finish. ... Adopting DevSecOps is not without its challenges. Shifting to DevSecOps means we’ve got to knock down the walls that have long kept our devs, ops and security folks in separate corners. Balancing the need for rapid deployment with security considerations can be challenging. To nail DevSecOps, teams must level up their skills through targeted training. Weaving together seasoned systems with cutting-edge DevSecOps tactics calls for a sharp, strategic approach. 


Critical Android Vulnerability Impacting Millions of Pixel Devices Worldwide

This backdoor vulnerability, undetectable by standard security measures, allows unauthorized remote code execution, enabling cybercriminals to compromise devices without user intervention or knowledge, thanks to the app’s privileged system-level status and the fact that it cannot be uninstalled. The Showcase.apk application holds excessive system-level privileges, enabling it to fundamentally alter the phone’s operating system even though its function does not require such high permissions. The application’s configuration file retrieval lacks essential security measures, such as domain verification, potentially exposing the device to unauthorized modifications and malicious code execution through compromised configuration parameters. The application suffers from multiple further weaknesses: insecure default variable initialization during certificate and signature verification allows validation checks to be bypassed, configuration file tampering risks compromise, and the application’s reliance on bundled public keys, signatures, and certificates creates a bypass vector for verification.
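As a rough illustration of the missing domain verification, a hardened configuration fetch would at minimum check the URL's scheme and host against an allow-list before downloading anything. The host names and function below are hypothetical, not drawn from Showcase.apk itself.

```python
# Illustrative safeguard: only fetch configuration from an allow-listed
# domain, and only over HTTPS. Hosts here are made up for the example.
from urllib.parse import urlparse

ALLOWED_HOSTS = {"config.example-vendor.com"}

def is_trusted_config_url(url: str) -> bool:
    """Reject any config URL that is not HTTPS on an allow-listed host."""
    parts = urlparse(url)
    return parts.scheme == "https" and parts.hostname in ALLOWED_HOSTS

assert is_trusted_config_url("https://config.example-vendor.com/app.cfg")
assert not is_trusted_config_url("http://config.example-vendor.com/app.cfg")   # plaintext transport
assert not is_trusted_config_url("https://attacker.example.net/app.cfg")       # untrusted host
```

Skipping a check like this is what lets an attacker substitute their own configuration source and feed the privileged app malicious parameters.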


Using Artificial Intelligence in surgery and drug discovery

“We’re seeing how AI is adapting, learning, and starting to give us more suggestions and even take on some independent tasks. This development is particularly thrilling because it spans across diagnostics, therapeutics, and theranostics—covering a wide range of medical areas. We’re on the brink of AI and robotics merging together in a very meaningful way,” Dr Rao said. However, he said he would like to add a word of caution. He said he often tells junior enthusiasts who are eager to use AI in everything: AI is not a replacement for natural stupidity. ... He said that one of the most impressive applications of this AI was during the preparation of a US FDA application, which is typically a very cumbersome and expensive process. “At that point, I’d already completed the preclinical phase but wasn’t certain about the additional 20-30 tests I might need. Instead of spending hundreds of thousands of dollars on trial and error, we fed all our data into this AI system. Now, it’s important to note that pharma companies are usually reluctant to share their proprietary data, so gathering information is often a challenge,” he said.  


Mastercard Is Betting on Crypto—But Not Stablecoins

“We’re opening up this crypto purchase power to our 100 million-plus acceptance locations,” Raj Dhamodharan, Mastercard's head of crypto and blockchain, told Decrypt. “If consumers want to buy into it, if they want to be able to use it, we want to enable that—in a safe way.” Perhaps in the name of safety, the new MetaMask Card isn’t compatible with most cryptocurrencies. You can’t use it to buy a plane ticket with Pepecoin, or a sandwich with SHIB. The card is only compatible with dominant stablecoins USDT and USDC, as well as wrapped Ethereum. ... Dhamodharan and his team are currently endeavoring to create an alternative system to stablecoins that—instead of putting crypto companies like Circle and Tether in the catbird seat of the new digital economy—keeps payment services like Mastercard, and traditional banks, at the center. Key to this plan is unlocking the potential of bank deposits, which already exist on digital ledgers—just not ones that live on-chain. Dhamodharan estimates that some $15 trillion worth of digital bank deposits currently exist in the United States alone.


A Group Linked To Ransomhub Operation Employs EDR-Killing Tool

Experts believe RansomHub is a rebrand of the Knight ransomware. Knight, also known as Cyclops 2.0, appeared in the threat landscape in May 2023. The malware targets multiple platforms, including Windows, Linux, macOS, ESXi, and Android. The operators used a double extortion model for their RaaS operation. The Knight ransomware-as-a-service operation shut down in February 2024, and the malware’s source code was likely sold to the threat actor who relaunched the RansomHub operation. ... “One main difference between the two ransomware families is the commands run through cmd.exe. While the specific commands may vary, they can be configured either when the payload is built or during configuration. Despite the differences in commands, the sequence and method of their execution relative to other operations remain the same,” states the report published by Symantec. Although RansomHub only emerged in February 2024, it has rapidly grown and, over the past three months, has become the fourth most prolific ransomware operator based on the number of publicly claimed attacks.



Quote for the day:

"When your values are clear to you, making decisions becomes easier." -- Roy E. Disney

Daily Tech Digest - August 16, 2024

W3C issues new technical draft for verifiable credentials standards

Part of the promise of the W3C standards is the ability to share only the data that’s necessary for completing a secure digital transaction, Goodwin explained, noting that DHS’s Privacy Office is charged with “embedding and enforcing privacy protections and transparency in all DHS activities.” DHS was brought into the process to review the W3C Verifiable Credentials Data Model and Decentralized Identifiers framework and to advise on potential issues. DHS S&T said in a statement last month that “part of the promise of the W3C standards is the ability to share only the data required for a transaction,” which it sees as “an important step towards putting privacy back in the hands of the people.” “Beyond ensuring global interoperability, standards developed by the W3C undergo wide reviews that ensure that they incorporate security, privacy, accessibility, and internationalization,” said DHS Silicon Valley Innovation Program Managing Director Melissa Oh. “By helping implement these standards in our digital credentialing efforts, S&T, through SVIP, is helping to ensure that the technologies we use make a difference for people in how they secure their digital transactions and protect their privacy.”


Managing Technical Debt in the Midst of Modernization

Rather than delivering a product and then worrying about technical debt, it is more prudent to measure and address it continuously from the early stages of a project, including requirements and design, not just the coding phase. Project teams should be incentivized to identify improvement areas as part of their day-to-day work and implement fixes as and when possible. Early detection and remediation can help streamline IT operations, improve efficiencies, and optimize cost. ... Inadequate technical knowledge or limited experience with the latest skills is itself a source of technical debt. Enterprises must invest in and prioritize continuous learning to keep their talent pool up to date with the latest technologies. A skill-gap analysis helps forecast the skills needed for future initiatives. Teams should be encouraged to upskill in AI, cloud, and other emerging technologies, as well as modern design and security standards. This will help enterprises address the technical-debt skill gap effectively. Enterprises can also employ a hub-and-spoke model, where a central team offers automation and expert guidance while each development team maintains its own applications, systems, and related technical debt.


Generative AI Adoption: What’s Fueling the Growth?

The banking, financial services, and insurance (BFSI) sector is another area where generative AI is making a significant impact. In this industry, generative AI enhances customer service, risk management, fraud detection, and regulatory compliance. By automating routine tasks and providing more accurate and timely insights, generative AI helps financial institutions improve efficiency and deliver better services to their customers. For instance, generative AI can be used to create personalized customer experiences by analyzing customer data and predicting their needs. This capability allows banks to offer tailored products and services, improving customer satisfaction and loyalty. ... The life sciences sector stands to benefit enormously from the adoption of generative AI. In this industry, generative AI is used to accelerate drug discovery, facilitate personalized medicine, ensure quality management, and aid in regulatory compliance. By automating and optimizing various processes, generative AI helps life sciences companies bring new treatments to market more quickly and efficiently. For instance, generative AI can draw on masses of biological data to identify a probable drug candidate far faster than conventional methods.


Overcoming Software Testing ‘Alert Fatigue’

Before “shift left” became the norm, developers would write code that quality assurance testing teams would then comb through to identify the initial bugs in the product. Developers were then only tasked with reviewing the proofed end product to ensure it functioned as they initially envisioned. But now, the testing and quality control onus has been put on developers earlier and earlier. An outcome of this dynamic is that developers are becoming increasingly numb to the high volume of bugs they come across in the process, and as a result, they are pushing bad code to production. ... Organizations must ensure that vital testing phases are robust and well-defined to mitigate these adverse outcomes. These phases should include comprehensive automated testing, continuous integration (CI) practices, and rigorous manual testing by dedicated QA teams. Developers should focus on unit and integration tests, while QA teams handle system, regression, acceptance, and exploratory testing. This division of labor enables developers to concentrate on writing and refining code while QA specialists ensure the software meets the highest quality standards before production.


SSD capacities set to surge as industry eyes 128 TB drives

Maximum SSD capacity is expected to double from the current 61.44 TB by mid-2025, giving us 122 TB and even 128 TB drives, with the prospect of exabyte-capacity racks. Five suppliers have discussed and/or demonstrated prototypes of 100-plus TB capacity SSDs recently. ... Systems with enclosures full of high-capacity SSDs will need to cope with drive failure, and that means RAID or erasure coding schemes. SSD rebuilds take less time than HDD rebuilds, but higher-capacity SSDs take longer. For example, the 61.44 TB Solidigm D5-P5336 has a maximum sequential write bandwidth of 3 GBps, so rebuilding it would take approximately 5.7 hours. A 128 TB drive would take 11.85 hours at the same 3 GBps write rate. These are not insubstantial periods. Kioxia has devised an SSD RAID parity-compute offload scheme with a parity compute block in the SSD controller and direct memory access to neighboring SSDs to get the rebuild data. This avoids involving the host server’s processor in RAID parity-compute IO and could accelerate SSD rebuild speed.
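The rebuild-time figures quoted above follow directly from capacity divided by sustained write bandwidth, which is easy to verify:

```python
# Rebuild time = drive capacity / sustained sequential write bandwidth.
# Uses decimal units as drive vendors do: 1 TB = 10**12 bytes,
# 1 GBps = 10**9 bytes/s.

def rebuild_hours(capacity_tb: float, write_gbps: float) -> float:
    """Hours to sequentially rewrite a whole drive at its max write rate."""
    seconds = capacity_tb * 1e12 / (write_gbps * 1e9)
    return seconds / 3600

print(round(rebuild_hours(61.44, 3), 1))  # 5.7 hours
print(round(rebuild_hours(128, 3), 2))    # 11.85 hours
```

Note this is a best-case floor: a real rebuild also has to read and reconstruct data from the surviving drives, so sustained rates below the drive's maximum would stretch these times further.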


Putting Individuals Back In Charge Of Their Own Identities

Digital identity comprises many signals to ensure it can accurately reflect the real identity of the relevant individual. It includes biometric data, ID data, phone data, and much more. In shareable IDs, these unique features are captured through a combination of AI and biometrics, which provide robust protection against forgery and replication, and so provide high assurance that a person is who they say they are. Importantly, these technologies provide an easy and seamless alternative to other verification processes. For most people, visiting a bank branch to prove their identity with paper documents is no longer convenient, while knowledge-based authentication, like entering your mother’s maiden name, is not viable because data breaches make this information readily available for sale to nefarious actors. It’s no wonder that 76% of consumers find biometrics more convenient, while 80% find them more secure than other options. ... A shareable identity is a user-controlled identity credential that can be stored on a device and used remotely. Individuals can then simply re-use the same digital ID to gain access to services without waiting in line, offering time-saving convenience for all.


Revolutionizing cloud security with AI

Generative AI can analyze data from various sources, including social media, forums, and the dark web. AI models use this data to predict threat vectors and offer actionable insights. Enhanced threat intelligence systems can help organizations better understand the evolving threat landscape and prepare for potential attacks. Moreover, machine learning algorithms can automate threat detection across cloud environments, increasing the efficiency of incident response times. ... AI-driven automation is becoming helpful in handling repetitive security tasks, allowing human security professionals to focus on more complex challenges. Automation helps streamline and triage alerts, incident response, and vulnerability management. AI algorithms can process incident data faster than human operators, enabling quicker resolution and minimizing potential damage. ... AI models can enforce privacy policies by monitoring data access while ensuring compliance with regulations such as the General Data Protection Regulation in the U.K., or the California Consumer Privacy Act. When bolstered by AI, homomorphic encryption and differential privacy techniques offer ways to analyze data while keeping sensitive information secure and anonymous.
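As a minimal illustration of the differential-privacy idea mentioned above, a count query can be released with calibrated Laplace noise so that no individual record can be inferred from the answer. The epsilon value and dataset below are purely illustrative.

```python
# Sketch of an epsilon-differentially-private count. A counting query
# has sensitivity 1, so Laplace noise with scale 1/epsilon suffices.
import random

def dp_count(records, predicate, epsilon=0.5):
    """Return the true count plus Laplace(0, 1/epsilon) noise."""
    true_count = sum(1 for r in records if predicate(r))
    # Difference of two iid Exp(epsilon) draws is Laplace(0, 1/epsilon).
    noise = random.expovariate(epsilon) - random.expovariate(epsilon)
    return true_count + noise

ages = [23, 35, 41, 29, 52, 61, 33]
print(dp_count(ages, lambda a: a > 40))  # true count is 3, plus noise
```

Each released answer is perturbed, so an analyst still learns the approximate count while any single person's presence in the data stays plausibly deniable.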


Are CIOs at the Helm of Leading Generative AI Agenda?

The growing integration of generative AI into corporate technology and information infrastructures is likely to bring a notable shift to the role of CIOs. While many technology leaders are already spearheading gen AI adoption, their role goes beyond technology management. It now includes driving strategic growth and maintaining a competitive edge in an AI-driven landscape. ... The CIO role has evolved significantly over recent decades. Once focused primarily on maintaining system uptime and availability, CIOs now serve as key business enablers. As technology advances rapidly and organizations increasingly rely on IT, the CIO's influence on enterprise success continues to grow. According to the EY survey, CIOs who report directly to the CEO and co-lead the AI agenda are the most effective in driving strategic change. Sixty-three percent of CIOs are leading the gen AI agenda in their organizations, with CEOs close behind at 55%. Eighty-four percent of organizations where the gen AI agenda is co-led by the CIO and CEO achieve or anticipate achieving a 2x return on investment from gen AI, compared to only 56% of organizations where the agenda is led solely by CIOs.


Intel and Karma partner to develop software-defined car architecture

Instead of all those individual black boxes, each with a single job, the new approach is to consolidate the car's various functions into domains, with each domain being controlled by a relatively powerful car computer. These will be linked via Ethernet, usually with a master domain controller overseeing the entire network. We're already starting to see vehicles designed with this approach; the McLaren Artura, Audi Q6 e-tron, and Porsche Macan are all recent examples of software-defined vehicles. Volkswagen Group—which owns Audi and Porsche—is also investing $5 billion in Rivian specifically to develop a new software-defined vehicle architecture for future electric vehicles. In addition to advantages in processing power and weight savings, software-defined vehicles are easier to update over-the-air, a must-have feature since Tesla changed that paradigm. Karma and Intel say their architecture should also have other efficiency benefits. ... Intel is also contributing its power management SoC to get the most out of inverters, DC-DC converters, chargers, and as you might expect, the domain controllers use Intel silicon as well, apparently with some flavor of AI enabled.


Why the next Ashley Madison is just around the corner

Unfortunately, it’s not a matter of ‘if’ another huge data breach will occur – it’s simply a matter of when. Today organisations of all sizes, not just the big players, have a ticking time bomb on their hands with the potential to detonate their brand reputation and destroy customer loyalty. ... Due to a lack of dedicated cybersecurity teams and finite financial resources to allocate to protective measures, small organisations will often prove easier to successfully infiltrate than the average big player. The potential reward from a single attack may be smaller, but hackers can combine successful attacks against multiple SMEs to match the financial gain of successfully hacking a large organisation, and with far less effort. SMEs are therefore increasingly likely to fall victim to financially crippling attacks, with 46% of all cyber breaches now impacting businesses with fewer than 1,000 employees. ... The very first step in any attack chain is always the use of tools to gather intelligence about the victim’s systems, version numbers of unpatched software in use, and insecure configuration or programming. Any hacker, whether professional or amateur, uses scanning bots or relies on websites like Shodan.io to generate an attack list of victims with vulnerable software.



Quote for the day:

“No one knows how hard you had to fight to become who you are today.” -- Unknown

Daily Tech Digest - August 15, 2024

Better Cloud Security Means Getting Back to Basics

Securing the cloud isn’t rocket science – it just requires a little extra knowledge. While it’s tempting to think of the cloud as a new frontier in computing (and, in some ways, it is), cloud security solutions have been around for almost as long as the cloud itself. The trouble is that most organizations don’t know how they should think about cloud security in the first place. ... A good starting point for many organizations is simply evaluating how effective their existing cloud security is. It isn’t enough to implement security solutions – even if they’re the right solutions. It’s also important to know that they are functioning as intended. Today’s organizations have more testing and validation tools at their fingertips than ever, and conducting breach and attack simulation, automated red teaming, and other exercises can lay bare where vulnerabilities and inefficiencies exist. Recent testing reveals that the basic security suites offered by the leading cloud providers are not enough to detect all – or even most – attack activity, highlighting the areas where organizations need to implement new protections and providing insight into what additional solutions may be necessary.


Cloud Waste Management: How to Optimize Your Cloud Resources

To better understand cloud waste, we need to understand the iron triangle of project management, which states that there is always a tradeoff between speed, quality, and cost. If you want to deliver a quality product or feature quickly, it will cost you more. Businesses are always trying to innovate and deliver continuous value to their customers. Often, that means putting pressure on delivery teams to improve time to market. The effect is over-provisioned capacity: resources provisioned to validate a theory or concept are never deleted, because the teams have moved on, either to deliver the accepted solution or to another project assignment. This is one of the major contributors to cloud waste. ... Since you pay for each resource provisioned in the cloud, managing cloud waste becomes critical, as it directly impacts your business’s bottom line. CFOs and finance teams struggle to forecast and budget for cloud spend because they never know what capacity is wasted in the cloud, and there is no good way to review it regularly.


Campus NaaS: Transforming Enterprise Networking

The flexibility of the NaaS model allows businesses to experiment with new technologies and use cases without the risk of large, upfront investments in hardware and expertise. This is particularly valuable as emerging technologies like AI and edge computing become more prevalent in enterprise environments. ... The potential benefits of Campus NaaS are significant, but organizations must carefully evaluate potential NaaS providers. Standards-based solutions ensure interoperability between different NaaS components and service providers, allowing businesses to seamlessly integrate NaaS solutions from various vendors without compatibility issues. Security capabilities and long-term roadmaps should also be considered. Campus NaaS is poised to play a pivotal role in shaping the future of enterprise networking, enabling businesses to build the agile, high-performance foundations needed to thrive in an increasingly digital world. As the technology continues to evolve and mature, we can expect to see even more innovative use cases and deployment models emerge, further cementing the role of Campus NaaS as a cornerstone of modern enterprise IT strategy.


Applying Security Everywhere – How to Prioritise Risks Across Multiple Platforms

For IT architects and security teams, the joint challenge here is actually one of the oldest in IT – knowing what you have. Getting an accurate inventory of all your software assets and components is a hard task on one platform, let alone across internal datacenter deployments, web applications, public cloud implementations, and modern cloud-native applications. Keeping this inventory up to date is harder still, given how much change will take place over time across the entire application estate. Alongside this inventory, there are other factors to consider. Not all applications are created equal, and an issue in an internal web application used by a few people every month will not be as important as a critical vulnerability in a business application that generates revenue every day. Yet both of these applications may have a flaw, and alerts will be raised requesting that fixes or updates be made. Internal processes and workflows will also affect the situation. While security teams might spot potential issues in an application or software component like an API, they will not be responsible for making the change themselves.
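One simple way to encode the point that not all applications are created equal is to weight each finding's technical severity by the business criticality of the affected application. The tiers, weights, and sample findings below are purely illustrative, not a standard scoring scheme.

```python
# Rank findings by CVSS severity weighted by business criticality, so a
# moderate flaw in a revenue-critical app can outrank a severe flaw in a
# rarely used internal tool. Weights and data are made up for the example.

CRITICALITY = {"revenue-critical": 3.0, "internal": 1.0}

findings = [
    {"app": "billing-api", "tier": "revenue-critical", "cvss": 9.1},
    {"app": "intranet-reports", "tier": "internal", "cvss": 9.8},
    {"app": "billing-api", "tier": "revenue-critical", "cvss": 5.0},
]

def priority(f):
    return f["cvss"] * CRITICALITY[f["tier"]]

for f in sorted(findings, key=priority, reverse=True):
    print(f["app"], round(priority(f), 1))
```

With these weights, even a near-maximal CVSS 9.8 on the internal reporting tool ranks below a 9.1 on the revenue-critical billing API, which matches the prioritisation argument above.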


Attempting Digital Transformation? Try Embracing Team Resistance

Resistance to transformation has several causes, Dewal says. First off, many logistics professionals already feel slammed, and don’t welcome the idea of new work. “It can feel like an add-on, creating competing priorities,” she says. Then there’s a fear-based resistance to the perceived complexity of the new tasks involved. “It’s too complex and we don’t have the right skill sets to be able to execute on them,” she says, describing this mindset. “Collectively, let’s call it the fear of failure, of getting it wrong.” Finally, there’s the familiar human tendency to prefer sticking with the status quo. “That can hide variations underneath it,” Dewal says. “Sometimes the team is not even sure why the transformation is needed. Sometimes, they feel like they’re not getting enough support in terms of executing it.” Further, the survey dug into two types of resistance – productive and unproductive. Productive resistance is the type that comes from on-the-ground knowledge and expertise that relates to the implementation itself. ... Leaders who avoided a top-down, change-or-die approach, and instead focused on communication and collaboration, had a much better chance of success, the survey found.


How leading CISOs build business-critical cyber cultures

In information security, where risk is widespread, attacks are becoming increasingly sophisticated, and so much is on the line, one defining attribute of successful CISOs is their courage. The good news is, courage is a muscle that can be developed just like any other. It’s also a mindset. The CISOs on this panel described various internal motivators that keep them in the game, resilient, and adaptable, even in the face of daunting challenges. They made it clear that it’s a lot easier to be courageous when you’re driven by a love for what you do and maintain a clear line of sight to the impact you’re making. One of the common threads is their focus on “moments of truth,” those points of contact between cybersecurity and various stakeholders. Leaders who are intentional about this find they’re better able to see around corners and show up more strategically as business enablers. Rodgers says it’s a lesson she learned in the early days of her career when she worked on a help desk. Fielding complaints all day takes its own kind of courage. “But the beauty of it is, you get to know people and how they work,” she says. “I got to a point where I could anticipate what they were going to want, so I started proactively providing those things. ...”


How passkeys eliminate password management headaches

There are several usability challenges that could affect the adoption of passkeys. Key among them is compatibility, as passkeys may not work on outdated operating systems or older devices. Beyond the technical roadblocks, user resistance is often the reason new technology such as passkeys fails to gain adoption. After all, users have been leveraging passwords since the early 1960s. Emphasizing training and education on how to provision passkeys is essential to adoption, as registration could be challenging for non-tech-savvy users. It may be best to start with small groups or departments to address unique challenges within the organization’s diverse culture and educate users. Organizations are starting to adopt passkeys to enhance security and optimize productivity, and as with any new implementation, there will be challenges. Passkey implementation should begin with top-level leadership as early adopters, which will help employees buy in and ensure a smooth transition from traditional passwords to passkeys. Upfront investment in planning, and creating robust policies and processes, will be critical to the implementation’s success.


Six Common Digital Transformation Challenges

Aligned leadership helps in allocating resources efficiently, prioritizing initiatives that drive the most value, and mitigating risks associated with digital transformation efforts. Clear, consistent communication from aligned leaders also builds trust and motivates teams to adapt to new paradigms. Ultimately, leadership alignment serves as the backbone of successful digital transformation by driving coherent strategies and fostering an environment conducive to innovation and agility. Effective communication is paramount, with transparent discussions about goals, challenges, and expected outcomes. Additionally, establishing cross-functional teams can help integrate diverse perspectives, facilitating smoother transitions during technology adoption. By embedding these practices into the organizational fabric, leaders can drive successful digital transformation while maintaining strategic coherence. Addressing resistance to change and fostering a digital mindset among leaders is pivotal in navigating this digital transformation challenge. Resistance often stems from a fear of the unknown and a reluctance to abandon established processes. 


Why Can’t Automation Eliminate Configuration Errors?

The emergence of configuration intelligence changes the game in several ways. First, it means that anyone tasked with maintaining configurations can save a lot of time and trouble that used to involve manual, tedious but cognitively intense tasks like reading through YAML manifests or config files to identify tiny errors. Yes, some tools existed to do this before, but they mostly functioned more like “linters,” spotting obvious syntax errors. By simplifying the process, these tools drastically reduce the time spent manually maintaining configs. ... The lack of detailed expertise has been a traditional problem of IaC products, which struggle to keep up with configuration recommendations across the dozens of software applications and infrastructure components they manage and automate. The lack of detailed configuration expertise also creates a cadre of in-house experts, who become key sources of institutional memory — but also major risks. When your load-balancing guru walks out the door to take another job, then everything they know that’s not clearly documented goes out the door too.
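To make the distinction concrete, here is a minimal sketch of a check that goes beyond syntax linting to flag semantically risky settings in a parsed config. The rules and field names below are hypothetical examples, not drawn from any specific product:

```python
# Sketch: "configuration intelligence" that evaluates meaning, not syntax.
# The rules and field names below are hypothetical illustrations.

def check_config(config: dict) -> list[str]:
    """Return human-readable findings for an already-parsed config (e.g. from YAML)."""
    findings = []
    if config.get("replicas", 1) < 2:
        findings.append("replicas < 2: no redundancy if a node fails")
    mem = config.get("memory_limit", "")
    if mem and mem[-2:] not in ("Mi", "Gi"):
        findings.append(f"memory_limit '{mem}' has no unit suffix (Mi/Gi)")
    if config.get("image", "").endswith(":latest"):
        findings.append("image uses ':latest': deployments are not reproducible")
    return findings

cfg = {"replicas": 1, "memory_limit": "512", "image": "web:latest"}
for finding in check_config(cfg):
    print("-", finding)
```

A plain linter would accept `cfg` as valid YAML; all three problems here are in what the values mean, which is the gap configuration intelligence aims to close.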


Enterprise spending on cloud services keeps accelerating

“Enterprises are also choosing to house an ever-growing proportion of their data center gear in colocation facilities, further reducing the need for on-premise data center capacity. The rise of generative AI technology and services will only exacerbate those trends over the next few years, as hyperscale operators are better positioned to run AI operations than most enterprises,” he wrote. Dinsdale told me the workloads staying on-premises tend to be workloads that are either very complex and cannot easily be transitioned, are focused on highly sensitive data, are governed or influenced by regulatory issues, or are highly predictable and can be managed economically on-premises. Enterprises worldwide are spending around $100 billion per year on their own data center IT hardware and associated infrastructure software, which has held flat for the last several years. By comparison, enterprises are now spending $80 billion per quarter on cloud services; not to mention another $65 billion per quarter on SaaS. “And those cloud and SaaS numbers are growing like gangbusters,” he said.



Quote for the day:

"The whole point of getting things done is knowing what to leave undone." -- Lady Stella Reading

Daily Tech Digest - August 14, 2024

MIT releases comprehensive database of AI risks

While numerous organizations and researchers have recognized the importance of addressing AI risks, efforts to document and classify these risks have been largely uncoordinated, leading to a fragmented landscape of conflicting classification systems. ... The AI Risk Repository is designed to be a practical resource for organizations in different sectors. For organizations developing or deploying AI systems, the repository serves as a valuable checklist for risk assessment and mitigation. “Organizations using AI may benefit from employing the AI Risk Database and taxonomies as a helpful foundation for comprehensively assessing their risk exposure and management,” the researchers write. “The taxonomies may also prove helpful for identifying specific behaviors which need to be performed to mitigate specific risks.” ... The research team acknowledges that while the repository offers a comprehensive foundation, organizations will need to tailor their risk assessment and mitigation strategies to their specific contexts. However, having a centralized and well-structured repository like this reduces the likelihood of overlooking critical risks.


Why Agile Alone Might Not Be So Agile: A Witty Look at Methodology Madness

Agile’s problems often start with a fundamental misunderstanding of what it truly means to be agile. When the Agile Manifesto was penned back in 2001, its authors intended it to be a flexible, adaptable approach to software development, free from the rigid structures and bureaucratic procedures of traditional methodologies. But fast forward to today, and Agile has become its own kind of bureaucratic monster in many organizations — a tyrant disguised as a liberator. Why does this happen? Let’s dissect the two main problems: the roles defined within Agile and the one-size-fits-all mentality that organizations apply to Agile methodology. One of the biggest hurdles to successful Agile adoption is the disconnect between the executive suite and the teams on the ground. Executives often see Agile as a magic bullet for faster delivery and higher productivity, without fully understanding the nuances of the methodology. This disconnect can lead to unrealistic demands and pressure on teams to deliver more with each Sprint, which in turn leads to burnout and decreased quality. Moreover, the Agile Manifesto’s disdain for comprehensive documentation can be problematic in complex projects. 


Feature Flags Wouldn’t Have Prevented the CrowdStrike Outage

Feature flagging is a valuable technique for decoupling the release of new features from code deployment, and advanced feature flagging tools usually support percentage-based rollouts. For example, you can enable a feature on X% of targets to ensure it works before reaching 100%. While it’s true that feature flags can help to prevent outages, given the scale and complexity of the CrowdStrike incident, they would not have been sufficient for three reasons. First, a comprehensive staged rollout requires more than just “gradually enable this flag over the next few days”: There has to be an integration with the monitoring stack to perform health checks and stop the rollout if there are problems. There has to be a way to integrate with the CD pipeline to reuse the list of targets to roll out to and a list of health checks to track. Available feature flagging solutions require much work and expertise to support staged rollout at any reasonable scale. Second, CrowdStrike’s config had a complex structure requiring a “configuration system” and a “content interpreter.” Such configs would benefit from first-class schema support and end-to-end type safety.
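The staged-rollout-plus-health-check pattern described above can be sketched in a few lines. This is an illustrative toy, not CrowdStrike's or any vendor's actual mechanism; the `error_rate` callback stands in for a real integration with the monitoring stack:

```python
import hashlib

def in_rollout(target_id: str, percent: int) -> bool:
    """Deterministically bucket a target into the first `percent` of 100."""
    h = int(hashlib.sha256(target_id.encode()).hexdigest(), 16)
    return h % 100 < percent

def advance_rollout(targets, stages, error_rate, threshold=0.01):
    """Enable each stage only while the fleet's error rate stays healthy."""
    enabled = set()
    for percent in stages:  # e.g. [1, 5, 25, 100]
        if error_rate() > threshold:
            return enabled, f"halted before {percent}%"
        enabled = {t for t in targets if in_rollout(t, percent)}
    return enabled, "complete"

targets = [f"host-{i}" for i in range(1000)]
enabled, status = advance_rollout(targets, [1, 5, 25, 100], lambda: 0.0)
print(status, len(enabled))  # complete 1000
```

Even this toy shows the article's point: the rollout logic itself is trivial; the hard, vendor-specific work is wiring `targets` to the CD pipeline and `error_rate` to real telemetry.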


Putting Threat Modeling Into Practice: A Guide for Business Leaders

One of the primary benefits of threat modeling is its ability to reduce the number of defects that make it to production. By identifying potential threats and vulnerabilities during the design phase, companies can implement security measures that prevent these issues from ever reaching the production environment. This proactive approach not only improves the quality of products but also reduces the costs associated with post-production fixes and patches. ... Threat modeling helps us create reusable artifacts and reference patterns as code, which serve as blueprints for future projects. These patterns encapsulate best practices and lessons learned, ensuring that security considerations are consistently applied across all projects. By embedding these reference patterns into development processes, organizations reduce the need to reinvent the wheel for each new product, saving time and resources. ... The existence of well-defined reference patterns reduces the likelihood of errors during development. Developers can rely on these patterns as a guide, ensuring that they follow proven security practices without having to start from scratch. 


The magic of RAG is in the retrieval

The role of the LLM in a RAG system is to simply summarize the data from the retrieval model’s search results, with prompt engineering and fine-tuning to ensure the tone and style are appropriate for the specific workflow. All the leading LLMs on the market support these capabilities, and the differences between them are marginal when it comes to RAG. Choose an LLM quickly and focus on data and retrieval. RAG failures primarily stem from insufficient attention to data access, quality, and retrieval processes. For instance, merely inputting large volumes of data into an LLM with an expansive context window is inadequate if the data is excessively noisy or irrelevant to the specific task. Poor outcomes can result from various factors: a lack of pertinent information in the source corpus, excessive noise, ineffective data processing, or the retrieval system’s inability to filter out irrelevant information. These issues lead to low-quality data being fed to the LLM for summarization, resulting in vague or junk responses. It’s important to note that this isn’t a failure of the RAG concept itself. Rather, it’s a failure in constructing an appropriate “R” — the retrieval model.
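A toy version of the "R" makes the failure mode above tangible. This sketch scores documents by TF-IDF-weighted term overlap; production retrieval models use BM25 or embedding search, and the documents here are invented for illustration:

```python
import math
from collections import Counter

# Toy retrieval model: rank documents by TF-IDF-weighted overlap with the
# query, then assemble the context the LLM will summarize.
docs = {
    "d1": "the refund policy allows returns within 30 days",
    "d2": "our office hours are nine to five on weekdays",
    "d3": "refunds are issued to the original payment method",
}

def tokenize(text):
    return text.lower().split()

# Inverse document frequency: rarer terms carry more weight.
df = Counter(t for d in docs.values() for t in set(tokenize(d)))
idf = {t: math.log(len(docs) / df[t]) for t in df}

def retrieve(query, k=2):
    q = tokenize(query)
    scores = {
        doc_id: sum(idf.get(t, 0.0) for t in q if t in tokenize(text))
        for doc_id, text in docs.items()
    }
    return sorted(scores, key=scores.get, reverse=True)[:k]

top = retrieve("how do refunds work")
prompt = "Answer using only:\n" + "\n".join(docs[d] for d in top)
print(top)
```

If the scoring or the corpus is poor, `prompt` fills up with noise and the LLM can only summarize noise, which is exactly the "failure in constructing an appropriate R" the article describes.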


What enterprises say the CrowdStrike outage really teaches

CrowdStrike made two errors, enterprises say. First, CrowdStrike didn’t account for the sensitivity of its Falcon client software for endpoints to the tabular data that described how to look for security issues. As a result, an update to that data crashed the client by introducing a condition that had existed before but hadn’t been properly tested. Second, rather than doing a limited release of the new data file that would almost certainly have caught the problem and limited its impact, CrowdStrike pushed it out to its entire user base. ... The 37 who didn’t hold Microsoft accountable pointed out that security software necessarily has a unique ability to interact with the Windows kernel software, and this means it can create a major problem if there’s an error. But while enterprises aren’t convinced that Microsoft contributed to the problem, over three-quarters think Microsoft could contribute to reducing the risk of a recurrence. Nearly as many said that they believed Windows was more prone to the kind of problem CrowdStrike’s bug created, and that view was held by 80 of the 89 development managers, many of whom said that Apple’s MacOS or Linux didn’t pose the same risk and that neither was impacted by the problem.


MIT researchers use large language models to flag problems in complex systems

The researchers developed a framework, called SigLLM, which includes a component that converts time-series data into text-based inputs an LLM can process. A user can feed these prepared data to the model and ask it to start identifying anomalies. The LLM can also be used to forecast future time-series data points as part of an anomaly detection pipeline. While LLMs could not beat state-of-the-art deep learning models at anomaly detection, they did perform as well as some other AI approaches. If researchers can improve the performance of LLMs, this framework could help technicians flag potential problems in equipment like heavy machinery or satellites before they occur, without the need to train an expensive deep-learning model. “Since this is just the first iteration, we didn’t expect to get there from the first go, but these results show that there’s an opportunity here to leverage LLMs for complex anomaly detection tasks,” says Sarah Alnegheimish, an electrical engineering and computer science (EECS) graduate student and lead author of a paper on SigLLM.
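The time-series-to-text component can be illustrated with a simple quantizing encoder. This is one plausible encoding sketched for intuition, not SigLLM's actual implementation:

```python
# Sketch: turn raw sensor readings into a compact text sequence an LLM
# can consume as tokens. The scaling scheme here is illustrative only.

def series_to_text(values, lo=None, hi=None, levels=1000):
    """Map each float onto an integer in 0..levels-1 and join as text."""
    lo = min(values) if lo is None else lo
    hi = max(values) if hi is None else hi
    span = (hi - lo) or 1.0
    ints = [round((v - lo) / span * (levels - 1)) for v in values]
    return ",".join(str(i) for i in ints)

readings = [0.71, 0.72, 0.70, 0.73, 9.80, 0.71]  # one obvious spike
print(series_to_text(readings))
```

In the encoded string the spike stands out as the lone large integer among small ones, giving the LLM a textual pattern to flag when asked to identify anomalies.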


Cybersecurity should return to reality and ditch the hype

This shift from educational content to marketing blurs the line between genuine security insights and commercial interests, leading organizations to invest in solutions that may not address their unique challenges. Additionally, buzzword-driven content has become rampant, where terms like “zero-trust architecture” or “blockchain for security” are frequently mentioned in passing without delving into the practicalities and limitations of these technologies. ... we must first recognize the critical distinction between genuine cybersecurity work and the broader tech-centric content that often overshadows it. Real cybersecurity practice is anchored in a relentless pursuit to understand and mitigate the ever-evolving threats to our systems. It is a discipline that demands deep, continuously updated knowledge of systems, networks, and human behavior, alongside a steadfast commitment to the principles of confidentiality, integrity, and availability. True cybersecurity practitioners are those who engage in the laborious tasks of vulnerability assessment, threat modeling, incident response, and the continuous enhancement of security postures, often without the allure of viral recognition or simplistic solutions.


Harnessing AI for 6G: Six Key Approaches for Technology Leaders

Leaders must understand the enabling technologies behind 6G, such as terahertz and quantum communication, and the transformative potential of AI in network deployment and management. ... Engaging with international bodies like the ITU to contribute to the standardization process is crucial. This will ensure AI technologies are integrated into network designs from the beginning. Early involvement in these discussions will also help technology leaders to anticipate future developments and prepare strategies accordingly. ... Advocating for an AI-native 6G network involves embedding large language models and other AI technology into network equipment. This strategy allows autonomous operations and optimizes network management through machine learning algorithms. Such a proactive approach will streamline operations and enhance the reliability and efficiency of the network infrastructure. ... Emphasize the convergence of computing and communication and develop user-centric services that leverage 6G and AI to improve user experiences across various industries. Leaders should focus on creating solutions that are not only technologically advanced but also address the practical needs and preferences of end-users.


GenAI compliance is an oxymoron. Ways to make the best of it

Confoundingly, genAI software sometimes does things that neither the enterprise nor the AI vendor told it to do. Whether that’s making things up (a.k.a. hallucinating), observing patterns no one asked it to look for, or digging up nuggets of highly sensitive data, it spells nightmares for CIOs. This is especially true when it comes to regulations around data collection and protection. How can CIOs accurately and completely tell customers what data is being collected about them and how it is being used when the CIO often doesn’t know exactly what a genAI tool is doing? What if the licensed genAI algorithm chooses to share some of that ultra-sensitive data with its AI vendor parent? “With genAI, the CIO is consciously taking an enormous risk, whether that is legal risk or privacy policy risks. It could result in a variety of outcomes that are unpredictable,” said Tony Fernandes, founder and CEO of user experience agency UEGroup. “If a person chooses not to disclose race, for example, but an AI is able to infer it and the company starts marketing on that basis, have they violated the privacy policy? That’s a big question that will probably need to be settled in court,” he said.



Quote for the day:

"Before you are a leader, success is all about growing yourself. When you become a leader, success is all about growing others" -- Jack Welch

Daily Tech Digest - August 13, 2024

The Tug of War Between Biometrics and Privacy

The strengths of biometric identification can combat fraud. Your fingerprint proves you are you before you conduct a transaction on your mobile banking app, for example. At airports, biometric identification is implemented as a matter of public safety. Fingerprint biometrics are standard in background checks. Within an enterprise, biometric systems may be used to prevent insider threats, verifying an employee’s identity before they conduct a transaction. Among the myriad use cases for biometrics, the argument for this technology is its convenience and its strengths over traditional measures, such as passwords. Biometric identifiers are unique to the individual and difficult to alter or fake. ... In many scenarios, consent is clear-cut. An enterprise has an upfront policy, and users must give their explicit permission to have their biometrics collected. Think of a banking app; you have to click through a series of prompts before you can start using your thumbprint to log into your account. In other situations, consent is not so easily addressed. In an airport, for example, it is possible to opt out of facial recognition, but that might be surprising to many. 


Remember quantum computing in the cloud?

Quantum computing, while promising, is still mainly in the realm of future potential. The industry is making strides towards more advanced qubits and increased stability. However, the practical utility of these advancements remains over the horizon for many organizations. This timeline, coupled with the steep learning curve and investment required, has positioned quantum computing as a slower-evolving technology compared to AI. Moreover, the current quantum offerings, often accessed via cloud platforms, are still primarily experimental. They require specialized knowledge to leverage effectively, whereas GPUs integrated into cloud services can be readily used to scale existing AI operations with relatively lower barriers to entry. Why are generative AI and GPUs so dominant? The answer lies in immediate applicability and results. Businesses today face pressures to innovate faster than ever. Generative AI not only aids in creating innovative solutions but also provides a competitive edge in real-time decision-making processes. It is a tool ready to be wielded, with clear ROI and application pathways that quantum computing has yet to establish fully.


Welcome to the AI revolution: From horsepower to manpower to machine-power

Until very recently, technology was first and foremost a tool. It was something humans built and then used to do a job -- and to do it better, faster, and easier than we could without it. But still, we used technology. What's new with artificial intelligence (AI) is that we are not creating new tools to help us do a job. We are creating a new workforce to do the job for us. This trend is not absolute, of course, and we can always point to older technologies that may have done part of our job for us (factory automation began at least 200 years ago). However, we are now creating a cheaper, faster, better, scalable workforce, not a cheaper, faster, better, scalable toolset. This new workforce is not going to replace us all any time soon. There are two main reasons for this fact. The first is that the hype of AI far exceeds its current capabilities, except in some narrow, rules-based scenarios. Generative AI in particular appears almost magical in its ability to render text, images and even video. Yet its inability to understand any of its output, along with the volume of data and the power needed to train its models, surely limits it from replacing human workers.


The Crucial Role of Firewall Rule Histories

In the security industry, there are unfortunately many opportunities for organizational learning and improvement after a breach or an attack, regardless of whether they were successful or stopped right away. Beyond the containment and security enhancement steps, firewall rule histories are also necessary to create a comprehensive post-mortem analysis of the breach’s scope and root cause. One of the greatest takeaways from a firewall rule analysis is the insight into a network segmentation weakness or access control mechanism that needs to be addressed to prevent similar attacks from being successful in the future. Understanding the lateral movement of attackers within the network helps in assessing the full extent of compromised systems or data. Rule histories can show security teams whether an attack was conducted quickly, as soon as an attacker gained access; or if it was a slow, methodical process where adjustments were made over time to secure maximum impact when finally set into motion. Security teams can use firewall histories to identify recurring patterns, trends, or systemic vulnerabilities beyond those that lie on the surface. 


CISOs face uncharted territory in preparing for AI security risks

Despite the enormous intellectual, technical, and government resources devoted to creating AI risk models, practical advice for CISOs on how to best manage AI risks is currently in short supply. Although CISOs and security teams have come to understand the supply chain risks of traditional software and code, particularly open-source software, managing AI risks is a whole new ballgame. “The difference is that AI and the use of AI models are new,” Alon Schindel, VP of data and threat research at Wiz, tells CSO. “We have never seen technology developed so fast like these models,” he says. “It’s not like the machine learning models of the past. There are some great opportunities here, but the work is not done yet. We still haven’t worked out how to ensure this feature will be the most effective for security teams.” James Robinson, CISO at Netskope, tells CSO, “It’s still very early days. It’s rapidly developing. The research reports are coming out amazingly fast, and there’s a lot of excitement and investment. The landscape continues to evolve. That’s one thing CISOs must be prepared for.” “Newer architecture and newer models are advancing by the second nowadays,” says Omar Santos.


Powering Industry 4.0 with the intelligent Edge

Successful Edge deployments drive businesses to treat it as an integral part of their business strategy. Meeting the data demands of the latest AI-powered innovations isn't a one-person job. What’s clear is that AI is driving the demand for Edge technologies. To meet this demand, organizations will need to collaborate internally between IT and business teams, and externally with managed service providers (MSPs) who can help navigate legacy systems and protocols. Leveraging the knowledge of MSPs will be integral to finding the most efficient and effective ways for an enterprise to deploy and leverage Edge computing. By embracing the intelligent Edge, businesses can unlock a myriad of benefits from operational efficiency to real-time Actionable AI – the perfect foundation for agile and adaptable operations. As more enterprises look to adopt the latest Edge technologies, this foundation will be critical to ensuring seamless data processing, scalability, and the ability to adapt to evolving business goals, but this demands orchestration across IT and business functions. Keep in mind that going it alone on the journey can prevent enterprises from realizing the full potential of Edge.


The Changing C-Suite: Chief AI Officer In, Chief Diversity Officer Out

Foss explained that the shift toward integrating diversity responsibilities into broader leadership roles is partly due to the increasing expectation to do more with less. "As organizations understand having diverse teams lead to better outcomes and faster value creation, there's a growing consensus that all leaders should be involved in driving these initiatives," Foss said. From the perspective of Caroline Carruthers, CEO of global data consultancy Carruthers and Jackson, the roles that achieve longevity in the C-suite are those that are based around a corporate asset. "That could be anything from finance to people to data to operations to security," she said. ... Subramanian predicted that either the role of the chief diversity officer will evolve to encompass AI or a new role of chief AI officer will have broader oversight across AI and data. "It is likely that chief AI officers will develop close collaboration with security, IT, legal, and line of business leaders," Subramanian said. She added that she believes the roles of chief diversity officer and chief AI officer will merge, as AI needs data and the biggest opportunities with AI have to do with data.


From data to insight to action: The very human challenges of AI transformation

The first step in AI transformation is collecting data, which today is the easiest step. So far, Grantcharov has placed the platform in around 20 operating rooms across the U.S. Through a variety of sensors, the OR black box captured up to 1 million data points per day per site. These included audio-visual data of surgical procedures, electronic health records and input from surgical devices. The data also included biometric readings from the surgical team, such as their heart rate variability as a reflection of stress levels, and brain activity measured by wireless EEGs. ... But here’s where it’s also important to understand humans. AI can correlate OR accidents with certain events, but without a working hypothesis, it’s all just noise. For example, Grantcharov’s team hypothesized that stress could affect a surgeon’s performance by impacting their cognitive processing and decision making. So they designed the experiment to collect physiological data from the surgeons, and AI was able to correlate these data with OR accidents. The finding: Stressed-out surgeons had a 66% higher chance of making an error. ... Finally, systems are procedures or principles put into place that make the desired behavior the easiest to do.


UN Approves Cybercrime Treaty Despite Major Tech, Privacy Concerns

The treaty, passed on Aug. 8, will require a wide variety of companies — financial services, travel, technology, and telecommunications firms — not only to support domestic law enforcement, but to help with requests from treaty signatories, says Nick Ashton-Hart, head of the Cybersecurity Tech Accord delegation to the negotiations. "Unfortunately the draft adopted doesn't resolve any of the issues we raised, or that any other part of the private sector or civil society raised," he says. "Security researchers and penetration testers — as well as investigative journalists, whistleblowers, and others — are at risk of criminal prosecution because of the poor and vague wording in the criminalization chapter." ... "Because the convention allows all cooperation to take place in perpetual secrecy and has no oversight mechanism, the convention invites abusive requests for cooperation that can be used to undermine secure systems relied upon by billions of people and millions of enterprises each day," he says. "Without [cooperation] from the US and EU, there's little value in anyone else joining this..."


What Is Data Trust and Why Does It Matter?

Understanding the importance of data trust is the first step in implementing a program to build trust between the producers and consumers of the data products your company relies on increasingly for its success. Once you know the benefits and risks of making data trustworthy, the hard work of determining the best way to realize, measure, and maintain data trust begins. Among the goals of a data trust program are promoting the company’s privacy, security, and ethics policies, including consent management and assessing the risks of sharing data with third parties. The most crucial aspect of a data trust program is convincing knowledge workers that they can trust AI-based tools. A study released recently by Salesforce found that more than half of the global knowledge workers it surveyed don’t trust the data that’s used to train AI systems, and 56% find it difficult to extract the information they need from AI systems. Of the workers who don’t trust AI training data, three out of four state that the systems don’t have the information they need to be of use.



Quote for the day:

“Don’t let the fear of losing be greater than the excitement of winning.” -- Robert Kiyosaki

Daily Tech Digest - August 12, 2024

In three or four years, ‘we won’t even talk about AI’

In general, there’s a very positive view of AI in tech. In a lot of other industries, there’s some uncertainty, some trepidation, some curiosity. But part of our pulse survey said about three out of four tech workers are using AI on a daily basis. So, the adoption in this portfolio of companies is higher than most, and I’d also say most employers and workers have a very good idea that AI is going to improve their business and their work. ... “I view AI skills as adjacent, additive skills for most people — aside from really hardcore data scientists and AI engineers. This is how most people will work in the new world. Generally, it depends. Some organizations have built whole, distinct AI organizations. Others have built embedded AI domains in all of their job functions. It really depends. There’s a lot of discussion around whether companies should have a chief AI officer. I’m not sure that’s necessary. I think a lot of those functions are already in place. You do need someone in your organization who has a holistic view of the positive sides of this and the risks associated with this.”


The AI Balancing Act: Innovating While Safeguarding Consumer Privacy

There are two sides to every coin. While AI can further compliance efforts, it can also create new privacy and security challenges. This is particularly true today, amid an ongoing global effort to strengthen data privacy laws. 71% of countries have data privacy legislation, and in recent years, this has evolved to encapsulate AI. In the EU, for instance, approval has been secured from the European Parliament around a specific AI regulatory framework. This framework imposes specific obligations on providers of high-risk AI systems and could ban certain AI-powered applications. The fact is, AI-powered technology is immensely powerful. But, it comes with complex challenges to data privacy compliance. A primary concern here relates to purpose limitation, specifically the disclosure provided to consumers regarding the purpose(s) for data processing and the consent obtained. As AI systems evolve, they may find new ways to utilise data, potentially extending beyond the scope of original disclosure and consent agreement. As such, maintaining transparency in AI operations to ensure accurate and appropriate data use disclosures is critical.


Is biometric authentication still effective?

With the rapid advancement and accessibility of technologies, the efficacy and security of biometric authentication methods are under threat. Fraudsters are using spoofing techniques to replicate or falsify biometric data, such as creating synthetic fingerprints or 3D facial models, to fool sensors, mimic legitimate biometric traits and gain unauthorized access to secured services. ... Unlike traditional biometric authentication, which relies on static physical attributes, behavioral biometrics verify user identity based on unique interaction patterns, such as typing rhythm, mouse movements and touchscreen interactions. This shift is essential because behavioral biometrics offer a more dynamic and adaptive layer of security, making it significantly harder for fraudsters to replicate or mask. ... With data scattered across different systems, it’s challenging to correlate information, connect the dots and identify overarching patterns of bad behavior. Such a fragmented, decentralized approach causes businesses to overlook crucial fraud indicators and struggle to respond effectively to emerging threats due to the lack of visibility and coordination among disparate fraud prevention tools.
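To give a feel for how typing rhythm can serve as a behavioral signal, here is a deliberately minimal sketch: reduce a session's key-down timestamps to the mean and spread of inter-key intervals, then compare against a profile captured at enrolment. Everything here (`keystroke_features`, `matches_profile`, the fixed tolerance) is a toy assumption; production behavioral biometrics use many more features (digraph latencies, pressure, mouse curves) and probabilistic models rather than a hard threshold.

```python
from statistics import mean, stdev

def keystroke_features(timestamps):
    """Reduce key-down timestamps (seconds) to (mean, stdev) of inter-key intervals."""
    intervals = [b - a for a, b in zip(timestamps, timestamps[1:])]
    return mean(intervals), stdev(intervals)

def matches_profile(timestamps, profile, tolerance=0.6):
    """Rough check: does this session's typing rhythm resemble the enrolled profile?"""
    m, s = keystroke_features(timestamps)
    pm, ps = profile
    return abs(m - pm) <= tolerance * pm and abs(s - ps) <= tolerance * max(ps, 1e-9)

# Enrolment session, then a later login attempt by the same user
enrolled = keystroke_features([0.00, 0.18, 0.35, 0.55, 0.71, 0.90])
legitimate = matches_profile([0.00, 0.17, 0.36, 0.52, 0.70, 0.92], enrolled)
```

The dynamic nature the article highlights comes from the fact that a spoofer who replays perfectly uniform, machine-generated keystrokes will miss the human variability baked into the enrolled profile.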


Practical strategies for mitigating API security risks

Identity and access management is crucial for a complete API security strategy. IAM facilitates efficient user management from creation to deactivation and ensures that only authorized individuals access APIs. IAM enables granular access control, granting permissions based on specific attributes and resources rather than just predefined roles. Integration with security information and event management (SIEM) systems enhances security by providing centralized visibility and enabling better threat detection and response. AI and machine learning are revolutionizing API security by providing sophisticated tools that enhance design, testing, threat detection, and overall governance. These technologies improve the robustness and resilience of APIs, enabling organizations to stay ahead of emerging threats and regulatory changes. As AI evolves, its role in API security will become increasingly vital, offering innovative solutions to the complex challenges of safeguarding digital assets. AI in API security goes beyond the limitations of human or rule-based interventions, enabling advanced pattern recognition and automating security audits and governance for greater defense against evolving threats.
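The granular, attribute-based access control the article contrasts with predefined roles can be sketched in a few lines. The types and rules below (`User`, `Resource`, `can_access`, the department/clearance attributes) are hypothetical illustrations of the ABAC pattern, not any specific IAM product's API.

```python
from dataclasses import dataclass

@dataclass
class User:
    id: str
    department: str
    clearance: int
    active: bool = True

@dataclass
class Resource:
    owner_department: str
    min_clearance: int

def can_access(user: User, resource: Resource, action: str) -> bool:
    """Grant by attributes rather than a single role: liveness, clearance, ownership."""
    if not user.active:
        return False
    if user.clearance < resource.min_clearance:
        return False
    # Writes are restricted to the owning department; reads only need clearance.
    if action == "write" and user.department != resource.owner_department:
        return False
    return True

alice = User("alice", "payments", clearance=3)
ledger = Resource(owner_department="payments", min_clearance=2)
assert can_access(alice, ledger, "write")
assert not can_access(User("bob", "marketing", 3), ledger, "write")
```

Because each decision is a pure function of attributes, every denial can also be emitted as a structured event to a SIEM, which is exactly the centralized visibility the excerpt recommends.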


The evolution of the CTO – from tech keeper to strategic leader

CTOs have experienced a huge shift in how they are positioned in the workplace. They are no longer part of a small-to-medium-sized team that operates separately from the rest of the business; they are the key to tangible business growth and perhaps one of the most crucial parts of a leadership team. The main duty of CTOs is to maintain – and where possible, to modernise – tech, and to decide when something has kicked the bucket and no longer has a purpose. These things require people power, specialist skills and money. Needless to say, investment in the role is vital. Tech leaders often feel burnt out, or worried that they don’t have the resources and support needed to do their job well. ... The saying goes, “You can never set foot in the same river twice,” and the same is true for leaders in tech – everything evolves from the moment you start working on a project. There is much to appreciate about technology that remains stable yet adaptable when changes are necessary during development. Today, innovative CTOs are on the lookout for software solutions that come with the flexibility to make an important U-turn if ever needed.


How AIOps Is Transforming IT Operations Management

IT operations management has become increasingly challenging as networks have become larger and more complex, with the introduction of remote workers and the distribution of applications and workloads across networks. Traditional operations management tools and practices struggle to keep up with the ever-growing volumes of data from multiple sources within complex and varied network environments. AIOps was designed to bring the speed, accuracy and predictive capabilities of AI technology to IT operations. AIOps provides contextually enriched, deep end-to-end, real-time insights that can be proactively acted upon, according to Forrester. AIOps solutions use real-time telemetry, emerging patterns and historical operational data to perform real-time assessments of what is happening, whether it has happened before or not, what paths it might take, and what negative effects it might have on business operations. ... A "digitally mature" organization has a much better ROI on the AI investment. But because this is a "rolling target" and not static, an organization's IT infrastructure "must be able to adapt and change," Ramamoorthy said.
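The "real-time assessment of what is happening, whether it has happened before or not" that the excerpt attributes to AIOps rests, at its simplest, on statistical baselining of telemetry. The class below is a toy stand-in under that assumption — a rolling z-score detector; real AIOps platforms layer on seasonality models, event correlation, and topology context.

```python
from collections import deque
from math import sqrt

class RollingAnomalyDetector:
    """Flag telemetry samples that deviate sharply from recent history."""

    def __init__(self, window=60, threshold=3.0):
        self.window = deque(maxlen=window)  # sliding baseline of recent samples
        self.threshold = threshold          # z-score beyond which we alert

    def observe(self, value):
        """Return True if `value` is anomalous relative to the current baseline."""
        anomalous = False
        if len(self.window) >= 10:  # need a minimal baseline before judging
            m = sum(self.window) / len(self.window)
            var = sum((x - m) ** 2 for x in self.window) / len(self.window)
            std = sqrt(var) or 1e-9  # avoid division by zero on flat signals
            anomalous = abs(value - m) / std > self.threshold
        self.window.append(value)
        return anomalous

det = RollingAnomalyDetector(window=30)
latencies = [100 + (i % 5) for i in range(30)] + [450]  # steady traffic, then a spike
flags = [det.observe(v) for v in latencies]
```

Only the final spike is flagged; the steady jitter stays inside the baseline, which is the "known vs. never-seen-before" distinction the paragraph describes, in miniature.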


The cyber assault on healthcare: What the Change Healthcare breach reveals

Many security leaders report that they don’t have adequate resources to implement the needed security measures because they’re often competing with pricey life-saving medical equipment for the limited funds available to spend, Kim says. Furthermore, he says their complex technology environments can make applying and creating security in depth not only more challenging but more costly, too. That, in turn, makes it less likely for CISOs to get the resources they need. Security teams in healthcare also have more challenges in updating and patching systems, Riggi explains, as the sector’s need for 24/7 availability means organizations can’t easily go offline — if they can go offline at all — to perform needed work. Healthcare security leaders also have a rapidly expanding tech environment to secure, as both more partners and more patients with remote medical devices become part of the sector’s already highly interconnected environment, says Errol S. Weiss, chief security officer at Health-ISAC. Such expansion heightens the challenges, complexities and costs of implementing security controls as well as heightening the risks that a successful attack against one point in that web would impact many others.


Solar Power Installations Worldwide Open to Cloud API Bugs

"The issue we discovered lies in the cloud APIs that connect the hardware with the user," both on Solarman's platform and on Deye Cloud, says Bogdan Botezatu, director of threat research and reporting at Bitdefender. "These APIs have vulnerable endpoints that allow an unauthorized third party to change settings or otherwise control the inverters and data loggers via the vulnerable Solarman and Deye platforms," he says. Bitdefender, for instance, found that the Solarman platform's /oauth2-s/oauth/token API endpoint would let an attacker generate authorization tokens for any regular or business accounts on the platform. "This means that a malicious user could iterate through all accounts, take over any of them and modify inverter parameters or change how the inverter interacts with the grid," Bitdefender said in its report. The security vendor also found Solarman's API endpoints to be exposing an excessive amount of information — including personally identifiable information — about organizations and individuals on the platform. 
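To illustrate the class of flaw Bitdefender describes — not the actual Solarman code, which is unpublished — consider a token endpoint that mints tokens for whatever account identifier the caller supplies versus one that first verifies a credential. All names here (`issue_token`, `USERS`, the endpoint functions) are hypothetical.

```python
import hashlib
import hmac
import secrets

SIGNING_KEY = secrets.token_bytes(32)
USERS = {"acct-1001": {"password_hash": hashlib.sha256(b"s3cret").hexdigest()}}

def issue_token(account_id: str) -> str:
    """Mint an HMAC-signed bearer token bound to an account id."""
    sig = hmac.new(SIGNING_KEY, account_id.encode(), hashlib.sha256).hexdigest()
    return f"{account_id}.{sig}"

# VULNERABLE pattern: trusts the caller-supplied account id outright,
# so an attacker can simply iterate account ids and collect valid tokens.
def token_endpoint_vulnerable(account_id: str) -> str:
    return issue_token(account_id)

# FIXED pattern: a token is minted only after the caller proves the credential.
def token_endpoint_fixed(account_id: str, password: str):
    user = USERS.get(account_id)
    supplied = hashlib.sha256(password.encode()).hexdigest()
    if user and hmac.compare_digest(user["password_hash"], supplied):
        return issue_token(account_id)
    return None
```

The account-enumeration takeover in the report maps to the vulnerable variant: token issuance decoupled from authentication turns every account id into a key.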


6 hard truths of generative AI in the enterprise

“Not a week goes by without another new tool that is mind-blowing in its abilities and potential future impact,’’ agrees David Higginson, chief innovation officer and executive vice president of Phoenix Children’s Hospital. But right now genAI “can really only be executed by a small number of technology giants rather than being tinkered with at a local skunkworks level within a healthcare organization,’’ he says. “Therefore, it feels as if we are in a bit of a paused state waiting for established vendors to deliver mature solutions that can provide the tangible value we all anticipated.” ... The fundamental barriers to adopting genAI are the scarcity and cost of the hardware, power, and data needed to train models, Higginson says. “With such scarcity comes the need to prioritize which solutions have the broadest appeal to the population and can generate the most long-term revenue,’’ he says. ... While research and development continue to move the needle on what genAI can do, “we know that data is a critical aspect to enabling AI solutions and we also recognize that many organizations are uncovering the work it will take to build the right data foundations to support scaled AI deployments,” says Deloitte’s Rowan.


Investing in Capacity to Adapt to Surprises in Software-Reliant Businesses

A well-known and contrarian adage in the Resilience Engineering community is that Murphy's Law - "anything that can go wrong, will" - is wrong. What can go wrong almost never does, but we don't tend to notice that. People engaged in modern work (not just software engineers) are continually adapting what they’re doing, according to the context they find themselves in. They’re able to avoid problems in almost everything they do, almost all of the time. When things do go "sideways" and an issue crops up that they need to handle or rectify, they are able to adapt to these situations due to the expertise they have. Research in decision-making described in the article Seeing the invisible: Perceptual-cognitive aspects of expertise by Klein, G. A., & Hoffman, R. R. reveals that while demonstrations of expertise play out in time-pressured and high-consequence events (like incident response), expertise comes from experience with facing the varying situations involved in "ordinary" everyday work. It is "hidden" because the speed and ease with which experts do ordinary work contrasts with how sophisticated the work is. 



Quote for the day:

"True leadership must be for the benefit of the followers, not the enrichment of the leaders." -- Robert Townsend