Daily Tech Digest - August 22, 2024

A Brief History of Data Ethics

The roots of the practice of data ethics can be traced back to the mid-20th century when concerns about privacy and confidentiality began to emerge alongside the growing use of computers for data processing. The development of automated data collection systems raised questions about who had access to personal information and how it could be misused. Early ethical discussions primarily revolved around protecting individual privacy rights and ensuring the responsible handling of sensitive data. One pivotal moment came with the enactment of the Fair Information Practice Principles (FIPPs) in the United States in the 1970s. These principles, which emphasized transparency, accountability, and user control over personal data, laid the groundwork for modern data protection laws and influenced ethical debates globally. ... Ethical guidelines such as those proposed by the European Union’s General Data Protection Regulation (GDPR) emphasize the importance of informed consent, limiting the collection of data to its intended use, and data minimization. All these concepts are part of an ethical approach to data and its usage. 


Collaborative AI in Building Architecture

As a design practice fascinated by the practical deployment of AI, we can’t help but be reminded of the early days of the personal computer, which also had a high impact on the design of the workplace. Back in the 1980s, most computers were giant, expensive mainframes that only large companies and universities could afford. But then, a few visionary companies started putting computers on desktops, first in workplaces, then in schools and finally in homes. Suddenly, computing power was accessible to everyone, but it needed different spaces. ... As with any powerful new tool, AI also brings with it profound challenges and responsibilities. One significant concern is the potential for AI to perpetuate or even amplify biases present in the data it is trained on, leading to unfair or discriminatory outcomes. AI bias is already prevalent, and it is crucial we learn how to teach AI to discern bias. Not so easy. AI could also be used maliciously, e.g. to create deepfakes or spread misinformation. There are also legitimate concerns about the impact of AI on jobs and the workforce, but equally about how it can improve and inspire that workforce.


The Deeper Issues Surrounding Data Privacy

Corporate legal departments will continue to draft voluminous agreement contracts packed with fine print provisions and disclaimers. CIOs can’t avoid this, but they can make a case to clearly present to users of websites and services how and under what conditions data is collected and shared. Many companies are doing this—and are also providing "Opt Out" mechanisms for users who are uncomfortable with the corporate data privacy policy. That said, taking these steps can be easier said than done. There are the third-party agreements that upper management makes that include provisions for data sharing, and there is also the issue of data custody. For instance, if you choose to store some of your customer data on a cloud service, you no longer have direct custody of that data; if the cloud provider then experiences a breach that compromises your data, whose fault is it? Once again, there are no ironclad legal or federal mandates that address this issue, but insurance companies do tackle it. “In a cloud environment, the data owner faces liability for losses resulting from a data breach, even if the security failures are the fault of the data holder (cloud provider),” says Transparity Insurance Services.


A survival guide for data privacy in the age of federal inaction

First, organizations should map or inventory their data to understand what they have. By mapping and inventorying data, organizations can better visualize, contextualize and prioritize risks. And, by knowing what data you have, not only can you manage current privacy compliance risks, but you can also be better prepared to respond to new requirements. As an example, those data maps can allow you to see the data flows you have in place where you are sharing data – a key to accurately reviewing your third-party risks. In addition to preparing you for existing and new privacy laws, mapping also allows organizations to identify their data flows and minimize risk exposure or compromise by better understanding where data is being distributed. Secondly, companies should think through how to operationalize priority areas to embed them in the business. This might be done by training privacy champions and adopting technology to automate privacy compliance obligations, such as an assessments program that provides a better understanding of data-related impacts.
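
To make the idea of a data map concrete, here is a minimal sketch of what such an inventory might look like in code. The system names, data categories and third-party recipients below are hypothetical examples, not taken from the article; the point is simply that once flows are recorded as data, risky third-party sharing can be surfaced automatically.

# Minimal sketch of a data inventory used to surface third-party data flows.
# All system names, data categories, and recipients are hypothetical examples.
from dataclasses import dataclass, field

@dataclass
class DataAsset:
    name: str                  # dataset or system holding the data
    categories: list           # types of personal data it contains
    shared_with: list = field(default_factory=list)  # third parties receiving it

inventory = [
    DataAsset("crm_customers", ["name", "email", "purchase_history"],
              shared_with=["email_marketing_vendor"]),
    DataAsset("hr_records", ["name", "salary", "health_info"]),
    DataAsset("web_analytics", ["ip_address", "device_id"],
              shared_with=["analytics_provider", "ad_network"]),
]

# Prioritize review of assets that hold sensitive categories and also flow to third parties.
SENSITIVE = {"health_info", "salary", "ip_address"}
for asset in inventory:
    if asset.shared_with and SENSITIVE.intersection(asset.categories):
        print(f"Review third-party flow: {asset.name} -> {asset.shared_with}")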


The Struggle To Test Microservices Before Merging

End-to-end testing is really where the rubber meets the road, and we get the most reliable tests when sending in requests that actually hit all dependencies and services to form a correct response. Integration testing at the API or frontend level using real microservice dependencies offers substantial value. These tests assess real behaviors and interactions, providing a realistic view of the system’s functionality. Typically, such tests are run post-merge in a staging or pre-production environment, often referred to as end-to-end (E2E) testing. ... What we really want is a realistic environment that can be used by any developer, even at an early stage of working on a PR. Achieving the benefits of API and frontend-level testing pre-merge would save effort on writing and maintaining mocks while testing real system behaviors. This can be done using canary-style testing in a shared baseline environment, akin to canary rollouts but in a pre-production context. To clarify that concept: We want to try running a new version of code on a shared staging environment, where that experimental code won’t break staging for all the other development teams, just as a canary deploy can go out, break in production, and still not take down the service for everyone.
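
One common way to implement that kind of isolation, sketched below under assumed service names and an assumed header, is to route requests that carry a sandbox identifier to the experimental version of a service while all other traffic continues to hit the shared baseline. This is an illustration of the general pattern, not the specific tooling the article has in mind.

# Minimal sketch of header-based routing for canary-style testing in a shared
# staging environment. The header name and service URLs are hypothetical.
BASELINE = {
    "payments": "http://payments.staging.svc",
    "orders": "http://orders.staging.svc",
}

# Experimental deployments registered per sandbox (e.g. one per pull request).
SANDBOXES = {
    "pr-1234": {"payments": "http://payments-pr-1234.staging.svc"},
}

def resolve(service: str, headers: dict) -> str:
    """Return the URL to call: the PR's shadow deployment if one is registered,
    otherwise the shared baseline version of the service."""
    sandbox = headers.get("x-sandbox-id")
    if sandbox and service in SANDBOXES.get(sandbox, {}):
        return SANDBOXES[sandbox][service]
    return BASELINE[service]

# A request tagged with the sandbox header reaches the experimental payments
# service but still uses the baseline orders service; untagged traffic is unaffected.
print(resolve("payments", {"x-sandbox-id": "pr-1234"}))  # shadow deployment
print(resolve("orders", {"x-sandbox-id": "pr-1234"}))    # shared baseline
print(resolve("payments", {}))                           # shared baseline

In practice the sandbox identifier has to be propagated along the whole call chain, for example as a request header or tracing baggage, so that every downstream hop keeps resolving to the right versions.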


Neurotechnology is becoming widespread in workplaces – and our brain data needs to be protected

Neurotechnology has long been used in the field of medicine. Perhaps the most successful and well-known example is the cochlear implant, which can restore hearing. But neurotechnology is now becoming increasingly widespread. It is also becoming more sophisticated. Earlier this year, tech billionaire Elon Musk’s firm Neuralink implanted the first human patient with one of its computer brain chips, known as “Telepathy”. These chips are designed to enable people to translate thoughts into action. More recently, Musk revealed a second human patient had one of his firm’s chips implanted in their brain. ... These concerns are heightened by a glaring gap in Australia’s current privacy laws – especially as they relate to employees. These laws govern how companies lawfully collect and use their employees’ personal information. However, they do not currently contain provisions that protect some of the most personal information of all: data from our brains. ... As the Australian government prepares to introduce sweeping reforms to privacy legislation this month, it should take heed of these international examples and address the serious privacy risks presented by neurotechnology used in workplaces.


I Said I Was Technically a CISO, Not a Technical CISO

Often a CISO will not come from a technical background, or their technical background is long in their career rearview mirror. Can a CISO be effective today without a technical background? And how do you keep up on your technical chops once you get the role? ... We often talk about the need for a CISO to serve as a bridge to the rest of the business, but a CISO’s role still needs to be grounded in technical proficiency, argues Jeff Hancock, who’s the CISO over at Access Point Technology, in a recent LinkedIn post. Now, many CISOs come from a technical background, but it becomes hard to maintain once you’re in a CISO role. Jeff says that while no one can be a master in all technical disciplines, CISOs should make a goal of selecting a few to retain mastery of over a long-term plan. Now, Andy, I’ll ask, does this reflect your experience? Is this a matter of credibility with the rest of the security team, or does a technical understanding allow a CISO to do their job better? When you were a CISO, how much of your technical skills sort of stayed intact?


API security starts with API discovery

Because APIs tend to change quickly, it’s essential to update the API inventory continuously. A manual change-control process can be used, but this is prone to breakdowns between the development and security teams. The best way to establish a continuous discovery process is to adopt a runtime monitoring system that discovers APIs from real user traffic, or to require the use of an API gateway, or both. These options yield better oversight of the development team than relying on manual notifications to the security team as API changes are made. ... Threats can arise from outside or inside the organization, via the supply chain, or by attackers who either sign up as paying customers, or take over valid user accounts to stage an attack. Perimeter security products tend to focus on the API request alone, but inspecting API requests and responses together gives insight into additional risks related to security, quality, conformance, and business operations. There are so many factors involved when considering API risks that reducing this to a single number is helpful, even if the scoring algorithm is relatively simple.
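
To make the runtime-discovery idea concrete, here is a small sketch that derives observed endpoints from access-log traffic and flags anything missing from the documented inventory. The log format, endpoints and inventory are hypothetical; a production system would consume gateway or service-mesh telemetry rather than a handful of strings.

# Minimal sketch: discover API endpoints from access-log traffic and flag any
# that are absent from the documented inventory. Log lines are hypothetical.
import re

documented = {("GET", "/api/v1/users"), ("POST", "/api/v1/orders")}

access_log = [
    '203.0.113.7 "GET /api/v1/users?page=2" 200',
    '203.0.113.9 "POST /api/v1/orders" 201',
    '198.51.100.4 "GET /api/v1/internal/debug" 200',   # shadow endpoint
]

LINE_RE = re.compile(r'"(?P<method>[A-Z]+) (?P<path>[^ ?"]+)')

observed = set()
for line in access_log:
    match = LINE_RE.search(line)
    if match:
        observed.add((match.group("method"), match.group("path")))

for method, path in sorted(observed - documented):
    print(f"Undocumented API discovered from traffic: {method} {path}")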


3 key strategies for mitigating non-human identity risks

The first step of any breach response activity is to understand if you’re actually impacted; the ability to quickly identify any impacted credentials associated with the third party experiencing the incident is key. You need to be able to determine what the NHIs are connected to, who is utilizing them, and how to go about rotating them without disrupting critical business processes, or at least understand those implications prior to rotation. We know that in a security incident, speed is king. Being able to outpace attackers and cut down on response time through documented processes, visibility, and automation can be the difference between mitigating direct impact from a third-party breach and being swept up in a list of organizations impacted due to their third-party relationships. ... When these factors deviate from the baseline activity associated with NHIs, they may be indicative of nefarious activity and warrant further investigation, or even remediation, if an attack or compromise is confirmed. Security teams are not only regularly stretched thin, but they also often lack a deep understanding of the organization’s entire application and third-party ecosystem, as well as insight into which assigned permissions and usage patterns are appropriate.
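
As a simple illustration of baselining NHI activity (the identity names, attributes and thresholds below are made up for the example), a periodic check might compare what each non-human identity is currently doing against what it normally does:

# Minimal sketch: flag non-human identities whose observed activity deviates
# from their recorded baseline. All names and values are hypothetical.
baseline = {
    "ci-deploy-token": {"source_ips": {"10.0.1.5"}, "services": {"artifact-registry"}, "max_calls_per_hour": 200},
    "billing-api-key": {"source_ips": {"10.0.2.9"}, "services": {"billing-db"}, "max_calls_per_hour": 50},
}

observed = {
    "ci-deploy-token": {"source_ips": {"10.0.1.5", "203.0.113.44"},
                        "services": {"artifact-registry", "customer-db"},
                        "calls_last_hour": 1900},
    "billing-api-key": {"source_ips": {"10.0.2.9"},
                        "services": {"billing-db"},
                        "calls_last_hour": 12},
}

for nhi, seen in observed.items():
    base = baseline[nhi]
    findings = []
    if seen["source_ips"] - base["source_ips"]:
        findings.append(f"new source IPs {seen['source_ips'] - base['source_ips']}")
    if seen["services"] - base["services"]:
        findings.append(f"new services {seen['services'] - base['services']}")
    if seen["calls_last_hour"] > base["max_calls_per_hour"]:
        findings.append(f"call volume {seen['calls_last_hour']} above baseline")
    if findings:
        print(f"Investigate {nhi}: " + "; ".join(findings))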


The Rising Cost of Digital Incidents: Understanding and Mitigating Outage Impact

Causal AI for DevOps promises a bridge between observability and automated digital incident response. By ‘Causal AI for DevOps’ I mean causal reasoning software that applies machine learning (ML) to automatically capture cause and effect relationships. Causal AI has the potential to help dev and ops teams better plan for changes to code, configurations or load patterns, so they can stay focused on achieving service-level and business objectives instead of firefighting. With Causal AI for DevOps, many of the incident response tasks that are currently manual can be automated: When service entities are degraded or failing and affecting other entities that make up business services, causal reasoning software surfaces the relationship between the problem and the symptoms it is causing. The team with responsibility for the failing or degraded service is immediately notified so they can get to work resolving the problem. Some problems can be remediated automatically. Notifications can be sent to end users and other stakeholders, letting them know that their services are affected along with an explanation for why this occurred and when things will be back to normal.



Quote for the day:

"Holding on to the unchangeable past is a waste of energy, and serves no purpose in creating a better future." -- Unknown

Daily Tech Digest - August 21, 2024

Use the AI S-curve to drive meaningful technological change

The S-curve is a graphical representation of how technology matures over time. It starts slowly, with early adopters, specialized use cases, and technocrats. As the technology proves its value, it enters a phase of rapid growth where adoption accelerates and becomes more widely integrated into various industries and applications. However, as technology advances, becoming cheaper, faster, and more efficient, it inevitably reaches some logical limit and settles into a natural “top” of the S-curve. When a technology reaches its limit, progress is relatively slow, typically requiring significant increases in complexity. ... As new technologies like AI emerge and mature, organizations must balance the need to stay competitive with the potential risks and uncertainties associated with early adoption. This challenge is not new. In his book, The Innovator’s Dilemma, Clayton Christensen describes the difficult choice companies face between maintaining their existing, profitable business models and investing in new, potentially disruptive technologies. So, how can organizations navigate this decision? One approach is to ensure that there is a dedicated unit that operates on a long takt time, outside the quarterly or annual reporting pressure. 


How to Present the Case for a Larger IT Budget

Outright rejection of a budget expansion request is unlikely, but not impossible. "The important thing is not to take rejection personally -- it's the case that's rejected, not the person who presented it," Biswas says. It's also important to understand that the rejection is not necessarily wrong. "Sometimes, we get too close to our ideas to evaluate them impartially," he explains. Understand, too, that a rejection isn't necessarily forever, since issues that prevented approval can be addressed and resolved to present a more convincing case. It's important to fully understand stakeholders' individual interests as well as their tactical and strategic goals, Hachmann advises. "This approach requires a strong understanding of the respective priorities of each person involved in the budget approval processes," he says. "With this [tactic], you'll be better equipped to align IT initiatives and their costs with the stakeholders' business strategy." IT leaders often make the mistake of generalizing on how a bigger budget will improve IT instead of communicating the ways it will help the business. "IT leaders should be careful they're not perceived as 'empire builders' instead of business leaders who want what's best for the larger organization," Biswas says.


Beyond Orchestration: A Comprehensive Approach to IaC Strategy

In large organizations, enforcing a single IaC tool across all departments is often impractical. Today, there is a diversity of tools that cater to different stacks, strengths and styles of collaboration with developers — from those that are native to a specific platform (CloudFormation for AWS or ARM for Azure), to those built for multicloud or cloud native use, such as Terraform, OpenTofu, Helm and Crossplane, and those that cater to developers, like Pulumi or the AWS Cloud Development Kit (CDK). Different teams may prefer different tools based on their expertise, use cases or specific project requirements. A robust IaC strategy must account for: the coexistence of multiple IaC tools within the organization; visibility across various IaC implementations; and governance and compliance across diverse IaC ecosystems. Ignoring this multi-IaC reality can lead to silos, reduced visibility and governance challenges. ... As DevOps and platform engineers, we’ve developed a platform that we ourselves have needed over many years of managing cloud fleets at scale. A platform that addresses not just tooling and orchestration, but all aspects of a comprehensive IaC strategy, can be the difference between 2 a.m. downtime and a good night’s sleep.
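
One small, tool-agnostic step toward that visibility, sketched here under the assumption that each tool leaves its usual marker files in a repository, is simply to scan repositories for the footprints of the different IaC tools and report what is actually in use:

# Minimal sketch: inventory which IaC tools are in use across repositories by
# looking for their characteristic marker files. The marker choices and repo
# paths are illustrative assumptions.
from pathlib import Path

MARKERS = {
    "Terraform/OpenTofu": ["*.tf"],
    "Pulumi": ["Pulumi.yaml"],
    "Helm": ["Chart.yaml"],
    "CloudFormation": ["template.yaml", "template.json"],
    "AWS CDK": ["cdk.json"],
}

def scan(repo_root: str) -> dict:
    root = Path(repo_root)
    found = {}
    for tool, patterns in MARKERS.items():
        hits = [p for pattern in patterns for p in root.rglob(pattern)]
        if hits:
            found[tool] = len(hits)
    return found

# Example: summarize IaC tool usage for locally checked-out repositories.
for repo in ["./repos/platform", "./repos/payments"]:
    print(repo, scan(repo))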


DevSecOps Needs Are Shifting in the Cloud-Native Era

A cornerstone activity for any DevSecOps team is to secure secrets — that is, the passwords and access credentials that allow access to services and applications. Marks noted that despite many respondents having tools in place for secret scanning or detection, the largest share of incidents (32%) came from secrets stolen from a repository. The study also included data on how frequently tools are used. She said that it showed that scanning takes place periodically, including daily, multiple times per week, or weekly, but this was not aligned with code pushes or development processes. "So, this is an area for much improvement as scanning takes resources and time and should align better with developer workflows," Marks said. ... Having so many tools can introduce such challenges as gaining consistency across development teams, dealing with alert fatigue, or determining which remediations are needed and/or how remediation can mitigate risk. ... "Instead, a third-party platform that can support multiple tools can serve as a governance layer to help orchestrate the usage of needed tools, collect data, and help security teams more efficiently gain the visibility they need, apply the right controls and processes, and determine needed actions."
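
One way to bring scanning closer to developer workflows, sketched here as an assumption rather than a description of the tools in the study, is a lightweight pre-commit check that inspects staged changes for obvious credential patterns before they ever reach the repository:

# Minimal sketch of a pre-commit secret check: scan files staged for commit
# for common credential patterns. The regexes are illustrative, not exhaustive.
import re
import subprocess
import sys
from pathlib import Path

PATTERNS = {
    "AWS access key": re.compile(r"AKIA[0-9A-Z]{16}"),
    "private key block": re.compile(r"-----BEGIN (?:RSA|EC|OPENSSH) PRIVATE KEY-----"),
    "hardcoded secret": re.compile(r"(?i)(password|secret|api_key)\s*=\s*['\"][^'\"]{8,}['\"]"),
}

def staged_files() -> list:
    out = subprocess.run(["git", "diff", "--cached", "--name-only"],
                         capture_output=True, text=True, check=True)
    return [f for f in out.stdout.splitlines() if f]

def main() -> int:
    findings = []
    for path in staged_files():
        try:
            text = Path(path).read_text(encoding="utf-8", errors="ignore")
        except OSError:
            continue  # deleted or unreadable file
        for label, pattern in PATTERNS.items():
            if pattern.search(text):
                findings.append(f"{path}: possible {label}")
    for finding in findings:
        print(finding, file=sys.stderr)
    return 1 if findings else 0  # a non-zero exit blocks the commit

if __name__ == "__main__":
    sys.exit(main())

Installed as a pre-commit hook (or wired in through a hook manager and mirrored in CI), a check like this runs on every code push rather than on a periodic schedule.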


Exclusive: How Piramidal is using AI to decode the human brain

The company is first fine-tuning its model for the neuro ICU; that product will be able to ingest EEG data and interpret it in near-real time, providing outputs to medical staff on the occurrence and diagnosis of disorders such as seizures, traumatic brain bleeding, inflammations and other brain dysfunctions. “It is truly an assistant to the doctor,” said Pahuja, noting that the model can ideally help provide quicker and more accurate diagnoses that can save doctors’ time and get patients the care they need much more quickly. “Brainwaves are central to neurology diagnosis,” Piramidal co-founder and CEO Dimitris Sakellariou, who holds a PhD in neuroscience, told VentureBeat. By automating analysis and enhancing understanding through large models, personalized treatment can be revolutionized and diseases can be predicted earlier in their progression, he noted. And, as wireless EEG sensors become more mainstream, models like Piramidal’s can enable the creation of personalized agents that “continuously measure and monitor brain health.” “These agents will offer real-time insights into how patients respond to new treatments and how their conditions may evolve,” said Sakellariou.


What is ‘model collapse’? An expert explains the rumours about an impending AI doom

There are hints developers are already having to work harder to source high-quality data. For instance, the documentation accompanying the GPT-4 release credited an unprecedented number of staff involved in the data-related parts of the project. We may also be running out of new human data. Some estimates say the pool of human-generated text data might be tapped out as soon as 2026. It’s likely why OpenAI and others are racing to shore up exclusive partnerships with industry behemoths such as Shutterstock, Associated Press and NewsCorp. They own large proprietary collections of human data that aren’t readily available on the public internet. However, the prospects of catastrophic model collapse might be overstated. Most research so far looks at cases where synthetic data replaces human data. In practice, human and AI data are likely to accumulate in parallel, which reduces the likelihood of collapse. The most likely future scenario will also see an ecosystem of somewhat diverse generative AI platforms being used to create and publish content, rather than one monolithic model. This also increases robustness against collapse.


Custodians looking to beat offenders in the GenAI cybersecurity battle

While the security community, as well as the global technology community, seems united on somehow regulating this new-age technology, there is a limited number of things that can actually be done. “There are two ways to combat attacks enabled by the widespread use of GenAI,” Kashifuddin said. “For internal threats, it comes down to deploying ‘cyber for GenAI’. For external threats, the use of ‘GenAI for cyber’ defense is becoming more of a reality and evolving quickly.” The use of cyber for GenAI threats simply means applying fundamental controls to protect company resources from a GenAI-based attack, he explained. “Traditional data protection tools like Data Loss Prevention (DLP), Cloud Access Security Broker (CASB) when used in conjunction with web proxies amplify a company’s ability to detect and restrict exfiltration of sensitive data to external GenAI services.” “GenAI for cyber” refers to a growing class of techniques using GenAI to combat GenAI-induced attacks. Apart from advanced phishing detection and automated incident response, this includes a bunch of new ways to better the model in order to neutralize adversarial activities. “The discipline of protecting AI systems is just beginning to evolve, but there are some interesting techniques for that already,” Barros said.


New phishing method targets Android and iPhone users

ESET analysts discovered a series of phishing campaigns targeting mobile users that used three different URL delivery mechanisms. These mechanisms include automated voice calls, SMS messages, and social media malvertising. The voice call delivery is done via an automated call that warns the user about an out-of-date banking app and asks the user to select an option on the numerical keyboard. After the correct button is pressed, a phishing URL is sent via SMS, as was reported in a tweet. Initial delivery by SMS was performed by sending messages indiscriminately to Czech phone numbers. The message sent included a phishing link and text to socially engineer victims into visiting the link. The malicious campaign was spread via registered advertisements on Meta platforms like Instagram and Facebook. These ads included a call to action, like a limited offer for users who “download an update below.” After opening the URL delivered in the first stage, Android victims are presented with two distinct campaigns, either a high-quality phishing page imitating the official Google Play store page for the targeted banking application, or a copycat website for that application. From here, victims are asked to install a “new version” of the banking app.


The Cloud Talent Crisis: Skills Shortage Drives Up Costs, Risks

"Cloud complexity is growing by the day, and with it, the challenge of responding to security threats," he said. "Organizations need more skilled engineers to deal with attacks — or even notice them." He noted that phishing attacks, password leakage, and third-party attacks — the three biggest threats reported in this year's survey — are even more dangerous without skilled, well-resourced personnel. ... "For me, cloud waste is the biggest concern," he said. "It means more money goes where it shouldn't, less money is available to hire talented staff, and fewer resources are available to that staff." ... "This approach can result in a faster pace of innovation, better mapping of features with customer requirements, and additional cost savings opportunities," he explained. Some components of this strategy include working backwards from the customer, organizing teams around products, keeping development teams small, and reducing risk through iteration. "There is a clear correlation between a lack of skilled talent and a lack of cloud maturity," O'Neill said. "High-maturity organizations tend to establish cloud principles and then strictly adhere to them."


The critical imperative of data center physical security: Navigating compliance regulations

In an increasingly digital world where data is often considered the new currency, data centers serve as the fortresses that safeguard the invaluable assets of organizations. While we often associate data security with firewalls, encryption, and cyber threats, it's imperative not to overlook the significance of physical security within these data fortresses. By assessing risks associated with physical security, environmental factors, and access controls, data center operators can take proactive measures to mitigate said risks. These measures greatly aid data centers in preventing unauthorized access, which can lead to data theft, service disruptions, and financial losses. Additionally, failing to meet compliance regulations can result in severe legal consequences and damage to an organization's reputation. In a perfect world, simply implementing iron-clad physical barriers and adhering to compliance regulations would completely eliminate the risk of data breaches. Unfortunately, that’s simply not the case. Both data center security and compliance encompass not only cybersecurity and physical security, but secure data sanitization and destruction as well.



Quote for the day:

"Personal leadership is the process of keeping your vision and values before you and aligning your life to be congruent with them." -- Stephen R. Covey

Daily Tech Digest - August 20, 2024

Humanoid robots are a bad idea

Humanoid robots that talk, perceive social and emotional cues, elicit empathy and trust, trigger psychological responses through eye contact and trick us into the false belief that they have inner thoughts, intentions and even emotions create for humanity what I consider a real problem. Our response to humanoid robots is based on delusion. Machines — tools, really — are being deliberately designed to hack our human hardwiring and deceive us into treating them as something they’re not: people. In other words, the whole point of humanoid robots is to dupe the human mind, to mislead us into having the kind of connection with these machines formerly reserved exclusively for other human beings. Why are some robot makers so fixated on this outcome? Why isn’t the goal instead to create robots that are perfectly designed for their function, rather than perfectly designed to trick the human mind? Why isn’t there a movement to make sure robots do not elicit false emotions and beliefs? What’s the harm in preserving our intuition that a robot is just a machine, just a tool? Why try to route around that intuition with machines that trick our minds, coopting or hijacking our human empathy?


11 Irritating Data Quality Issues

Organizations need to put data quality first and AI second. Without dignifying this sequence, leaders fall into fear of missing out (FOMO) in attempts to grasp AI-driven cures to either competitive or budget pressures, and they jump straight into AI adoption before conducting any sort of honest self-assessment as to the health and readiness of their data estate, according to Ricardo Madan, senior vice president at global technology, business and talent solutions provider TEKsystems. “This phenomenon is not unlike the cloud migration craze of about seven years ago, when we saw many organizations jumping straight to cloud-native services, after hasty lifts-and-shifts, all prior to assessing or refactoring any of the target workloads. This sequential dysfunction results in poor downstream app performance since architectural flaws in the legacy on-prem state are repeated in the cloud,” says Madan in an email interview. “Fast forward to today, AI is a great ‘truth serum’ informing us of the quality, maturity, and stability of a given organization’s existing data estate -- but instead of facing unflattering truths, invest in holistic AI data readiness first, before AI tools."


CISOs urged to prepare now for post-quantum cryptography

Post-quantum algorithms often require larger key sizes and more computational resources compared to classical cryptographic algorithms, a challenge for embedded systems, in particular. During the transition period, systems will need to support both classical and post-quantum algorithms to maintain interoperability with legacy systems. Deidre Connolly, cryptography standardization research engineer at SandboxAQ, explained: “New cryptography generally takes time to deploy and get right, so we want to have enough lead time before quantum threats are here to have protection in place.” Connolly added: “Particularly for encrypted communications and storage, that material can be collected now and stored for a future date when a sufficient quantum attack is feasible, known as a ‘Store Now, Decrypt Later’ attack: upgrading our systems with quantum-resistant key establishment protects our present-day data against upcoming quantum attackers.” Standards bodies, hardware and software manufacturers, and ultimately businesses across the globe will have to implement new cryptography across all aspects of their computing systems. Work is already under way, with vendors such as BT, Google, and Cloudflare among the early adopters.
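
A common transition pattern is hybrid key establishment: derive the session key from both a classical exchange and a post-quantum KEM, so the result stays secure as long as either component holds. The sketch below shows only the combining step; the two shared secrets are stand-ins for the outputs of a real ECDH exchange and a real post-quantum KEM such as ML-KEM, and the salt and context labels are placeholders.

# Minimal sketch of hybrid key derivation for the classical/post-quantum
# transition. The two "shared secrets" are placeholders for the outputs of a
# real ECDH exchange and a real post-quantum KEM; only the combining step is shown.
import hashlib
import hmac
import os

classical_secret = os.urandom(32)   # stand-in for an ECDH shared secret
pq_secret = os.urandom(32)          # stand-in for an ML-KEM shared secret

def hybrid_key(classical: bytes, post_quantum: bytes, context: bytes) -> bytes:
    """Bind both secrets into one session key, so breaking only one of the
    underlying exchanges is not enough to recover the key."""
    ikm = classical + post_quantum
    # HKDF-style extract-then-expand using HMAC-SHA256.
    prk = hmac.new(b"hybrid-kdf-salt", ikm, hashlib.sha256).digest()
    return hmac.new(prk, context + b"\x01", hashlib.sha256).digest()

session_key = hybrid_key(classical_secret, pq_secret, b"example-protocol v1")
print(session_key.hex())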


AI for application security: Balancing automation with human oversight

Security testing should be integrated throughout Application Delivery Pipelines, from design to deployment. Techniques such as automated vulnerability scanning, penetration testing, continuous monitoring, and many others are essential. By embedding compliance and risk assessment tasks into underlying change management processes, IT professionals can ensure that security testing is at the core of everything they do. Incorporating these strategies at the application component level ensures alignment with business needs to effectively prioritize results, identify attacks, and mitigate risks before they impact the network and infrastructure. ... To build a security-first mindset, organizations must embed security best practices into their culture and workflows. If new IT professionals coming into an organization are taught that security-first isn’t a buzzword, but instead the way the organization operates, it becomes company culture. Making security an integral part of the application delivery pipelines ensures that security policies and processes align with business goals. Education and communication are key—security teams must work closely with developers to ensure that security requirements are understood and valued. 


TSA biometrics program is evolving faster than critics’ perceptions

Privacy impact assessments (PIAs) are not only carried out for each new or changed process, but also published and enforced. The images of U.S. citizens captured by the TSA may be evaluated and used for testing, but they are deleted within 12 hours. Travelers have the choice of opting out of biometric identity verification, in which case they go through a manual ID check, just like decades ago. As happened previously with body scanners, TSA has adapted the signage it uses to notify the public about its use of biometrics. Airports where TSA uses biometrics now have signs that state in bold letters that participation is optional, explain how it works and include QR codes for additional information. The technology is also highly accurate, with tests showing 99.97% accurate verifications. In the cases that do not match, the traveler must go through the same manual procedure used previously, which is also applied when people opt out. TSA does not use biometrics to match people against mugshots from local police departments, for deportations or surveillance. In contrast, the proliferation of CCTV cameras observing people on their way to the airport and back home is not mentioned by Senator Merkley.


Blockchain: Redefining property transactions and ownership

Blockchain’s core strength lies in its ability to create a secure, immutable ledger of transactions. In the real estate context, this means that all details related to a property transaction— from the initial agreement to the final transfer of ownership—are recorded in a way that cannot be altered or tampered with. Blockchain technology empowers brokers to streamline transactions and enhance transparency, allowing them to focus on offering personalised insights and strategic advice. This shift enables brokers to provide a more efficient and cost-effective service while maintaining their advisory role in the real estate process. Another innovative application of blockchain in real estate is through smart contracts. These are digital contracts that automatically execute when certain conditions are met, ensuring that the terms of an agreement are fulfilled without the need for manual oversight. In real estate, smart contracts can be used to automate everything from title transfers to escrow arrangements. This automation not only speeds up the process but also reduces the chances of disputes, as all terms are clearly defined and executed by the technology itself. Beyond improving the efficiency of transactions, blockchain also has the potential to change how we think about property ownership. 
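
To illustrate the "execute automatically when conditions are met" idea, here is a plain-Python analogy of an escrow arrangement, not an actual on-chain contract; the parties, price and conditions are made up, and a real smart contract would be deployed to a blockchain rather than run locally:

# Plain-Python analogy of an escrow smart contract: funds are released
# automatically once the agreed conditions are met. Names and conditions are
# illustrative; a real contract would live and execute on-chain.
class EscrowContract:
    def __init__(self, buyer: str, seller: str, price: int):
        self.buyer, self.seller, self.price = buyer, seller, price
        self.conditions = {"payment_deposited": False, "title_verified": False}
        self.released = False

    def mark(self, condition: str) -> None:
        self.conditions[condition] = True
        self._try_release()   # execution is triggered by the state change itself

    def _try_release(self) -> None:
        if not self.released and all(self.conditions.values()):
            self.released = True
            print(f"Escrow released: {self.price} transferred to {self.seller}, "
                  f"title recorded for {self.buyer}")

contract = EscrowContract("buyer_wallet", "seller_wallet", 250_000)
contract.mark("payment_deposited")   # nothing happens yet
contract.mark("title_verified")      # both conditions met, funds release automatically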


Agile Reinvented: A Look Into the Future

There’s no denying that agile is poised at a pivotal juncture, especially given the advent of AI. While no one knows how AI will influence agile in the long term, it is already shaping how agile teams are structured and how its members approach their work, including using AI tools to code or write user stories and jobs to be done. To remain relevant and impactful, agile must be responsive to the evolving needs of the workforce. Younger developers, in particular, seek more room for creativity. New approaches to agile team formation—including Team and Org Topologies or FaST, which relies on elements of dynamic reteaming instead of fixed team structures to tackle complex work—are emerging to create space for innovation. Since agile was built upon the values of putting people first and adapting to change, it can, and should, continue to empower teams to drive innovation within their organizations. This is the heart of modern agile: not blindly adhering to a set of rules but embracing and adapting its principles to your team’s unique circumstances. As agile continues to evolve, we can expect to see it applied in even more varied and innovative ways. For example, it already intersects with other methodologies like DevSecOps and Lean to form more comprehensive frameworks. 


Breaking Free from Ransomware: Securing Your CI/CD Against RaaS

By embracing a proactive DevSecOps mindset, we can repel RaaS attacks and safeguard our code. Here’s your toolkit: ... Don’t wait until deployment to tighten the screws. Integrate security throughout the software development life cycle (SDLC). Leverage software composition analysis (SCA) and software bill of materials (SBOM) creation, helping you scrutinize dependencies for vulnerabilities and maintain a transparent record of every software component in your pipeline. ... Your pipelines aren’t static entities; they are living ecosystems demanding constant vigilance. Leverage tools to implement continuous monitoring and logging of pipeline activity. Look for anomalies, suspicious behaviors and unauthorized access attempts. Think of it as having a cybersecurity hawk perpetually circling your pipelines, detecting threats before they take root. ... Minimize unnecessary access to your CI/CD environment. Enforce strict role-based access controls and least privilege. Utilize access control tools to manage user roles and permissions tightly, ensuring only authorized users can interact with sensitive resources. Remember, the 2022 GitHub vulnerability exposed the dangers of lax access control in CI/CD environments.
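
As a small illustration of how an SBOM can be put to work inside the pipeline (assuming a CycloneDX-style JSON document; the component names and the deny-list entry are made up, not real advisories), a gate like the following could fail the build when a flagged dependency shows up:

# Minimal sketch: check the components listed in a CycloneDX-style SBOM against
# a deny-list of known-vulnerable versions. SBOM contents and the deny-list are
# illustrative assumptions.
import json

sbom_json = """
{
  "bomFormat": "CycloneDX",
  "components": [
    {"name": "requests", "version": "2.31.0"},
    {"name": "examplelib", "version": "1.0.2"}
  ]
}
"""

KNOWN_VULNERABLE = {("examplelib", "1.0.2")}

sbom = json.loads(sbom_json)
flagged = [(c["name"], c["version"])
           for c in sbom.get("components", [])
           if (c["name"], c["version"]) in KNOWN_VULNERABLE]

if flagged:
    print("Vulnerable components found:", flagged)
    raise SystemExit(1)   # fail this pipeline stage
print("SBOM check passed")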


Achieving cloudops excellence

Although there are no hard-and-fast rules regarding how much to spend on cloudops as a proportion of the cost of building or migrating applications, I have a few rules of thumb. Typically, enterprises should spend 30% to 40% of their total cloud computing budget on cloud operations and management. This covers monitoring, security, optimization, and ongoing management of cloud resources. ... Cloudops requires a new skill set. Continuous training and development programs that focus on operational best practices are vital. This transforms the IT workforce from traditional system administrators to cloud operations specialists who are adept at leveraging cloud environments’ nuances for efficiency. Beyond technical implementations, enterprise leaders must cultivate a culture that prioritizes operational readiness as much as innovation. The essential components are clear communication channels, cross-departmental collaboration, and well-defined roles. Organizational coherence enables firms to pivot and adapt swiftly to the changing tides of technology and market demands. It’s also crucial to measure success by deployment achievements and ongoing performance metrics. By setting clear operational KPIs from the outset, companies ensure that cloud environments are continuously aligned with business objectives. 


What high-performance IT teams look like today — and how to build one

“Today’s high-performing teams are hybrid, dynamic, and autonomous,” says Ross Meyercord, CEO of Propel Software. “CIOs need to create a clear vision and articulate and model the organization’s values to drive alignment and culture.” High-performance teams are self-organizing and want significant autonomy in prioritizing work, solving problems, and leveraging technology platforms. But most enterprises can’t operate like young startups with complete autonomy handed over to devops and data science teams. CIOs should articulate a technology vision that includes agile principles around self-organization and other non-negotiables around security, data governance, reporting, deployment readiness, and other compliance areas. ... High-performance teams are often involved in leading digital transformation initiatives where conflicts around priorities and solutions among team members and stakeholders can arise. These conflicts can turn into heated debates, and CIOs sometimes have to step in to help manage challenging people issues. “When a CIO observes misaligned goals or intra-IT conflict, they need to step in immediately to prevent organizational scar tissue from forming,” says Meyercord of Propel Software. 



Quote for the day:

"Don't necessarily and sharp edges. Occasionally they are necessary to leadership." -- Donald Rumsfeld

Daily Tech Digest - August 17, 2024

The importance of connectivity in IoT

There is no point in having IoT if the connectivity is weak. Without reliable connectivity, the data from sensors and devices, which is intended to be collected and analysed in real time, might end up being delayed by the time it is eventually delivered. In healthcare, connected devices monitor the vital signs of a patient in an intensive-care ward in real time and alert the physician to any observations that are outside of the specified limits. ... The future evolution of connectivity technologies will combine with IoT to significantly expand its capabilities. The arrival of 5G will enable high-speed, low-latency connections. This transition will usher in IoT systems that were previously impossible, such as self-driving vehicles that instantaneously analyse vehicle states and provide real-time collision avoidance. The evolution of edge computing will bring data-processing closer to the edge (the IoT devices), thereby significantly reducing latency and bandwidth costs. Connectivity underpins almost everything we see as important with IoT – the data exchange, real-time usage, scale and interoperability we access in our systems.


Aren’t We Transformed Yet? Why Digital Transformation Needs More Work

When it comes to enterprise development, platforms alone can’t address the critical challenge of maintaining consistency between development, test, staging, and production environments. What teams really need to strive for is seamless propagation of changes between environments that are made production-like through synchronization, with full control over the process. This control enables the integration of crucial safety steps such as approvals, scans, and automated testing, ensuring that issues are caught and addressed early in the development cycle. Many enterprises are implementing real-time visualization capabilities to provide administrators and developers with immediate insight into differences between instances, including scoped apps, store apps, plugins, update sets, and even versions across the entire landscape. This extended visibility is invaluable for quickly identifying and resolving discrepancies before they can cause problems in production environments. A lack of focus on achieving real-time multi-environment visibility is akin to performing a medical procedure without an X-ray, CT, or MRI of the patient.


Why Staging Doesn’t Scale for Microservice Testing

So are we doomed to live in a world where staging is eternally broken? As we’ve seen, traditional approaches to staging environments are fraught with challenges. To overcome these, we need to think differently. This brings us to a promising new approach: canary-style testing in shared environments. This method allows developers to test their changes in isolation within a shared staging environment. It works by creating a “shadow” deployment of the services affected by a developer’s changes while leaving the rest of the environment untouched. This approach is similar to canary deployments in production but applied to the staging environment. The key benefit is that developers can share an environment without affecting each other’s work. When a developer wants to test a change, the system creates a unique path through the environment that includes their modified services, while using the existing versions of all other services. Moreover, this approach enables testing at the granularity of every code change or pull request. This means developers can catch issues very early in the development process, often before the code is merged into the main branch. 


A world-first law in Europe is targeting artificial intelligence. Other countries can learn from it

The act contains a list of prohibited high-risk systems. This list includes AI systems that use subliminal techniques to manipulate individual decisions. It also includes unrestricted, real-time facial recognition systems used by law enforcement authorities, similar to those currently used in China. Other AI systems, such as those used by government authorities or in education and healthcare, are also considered high risk. Although these aren’t prohibited, they must comply with many requirements. ... The EU is not alone in taking action to tame the AI revolution. Earlier this year the Council of Europe, an international human rights organisation with 46 member states, adopted the first international treaty requiring AI to respect human rights, democracy and the rule of law. Canada is also discussing the AI and Data Bill. Like the EU laws, this will set rules for various AI systems, depending on their risks. Instead of a single law, the US government recently proposed a number of different laws addressing different AI systems in various sectors. ... The risk-based approach to AI regulation, used by the EU and other countries, is a good start when thinking about how to regulate diverse AI technologies.


Building constructive partnerships to drive digital transformation

The finance team needs to have a ‘seat at the table’ from the very beginning to overcome these challenges and effect successful transformation. Too often, finance only becomes involved when it comes to the cost and financing of the project, and when finance leaders do try to become involved, they can have difficulty gaining access to the needed data. This was recently confirmed by members of the Future of Finance Leadership Advisory Group, where almost half of the group polled (47%) noted challenges gaining access to needed data. As finance professionals understand the needs of stakeholders within the business, they are in the best position to outline what is needed for IT to create an effective, efficient structure. Finance professionals are in-house consultants who collaborate with other functions to understand their workings and end-to-end procedures, discover where both problems and opportunities exist, identify where processes can be improved, and ultimately find solutions. Digital transformation projects rely on harmonizing processes and standardizing systems across different operations. 


DevSecOps: Integrating Security Into the DevOps Lifecycle

The core of DevSecOps is ‘security as code’, a principle that dictates embedding security into the software development process. To keep every release tight on security, we weave those practices into the heart of our CI/CD flow. Automation is key here, as it smooths out the whole security gig in our dev process, ensuring we are safe from the get-go without slowing us down. A shared responsibility model is another pillar of DevSecOps. Security is no longer the sole domain of a separate security team but a shared concern across all teams involved in the development lifecycle. Working together, security isn’t just slapped on at the end but baked into every step from start to finish. ... Adopting DevSecOps is not without its challenges. Shifting to DevSecOps means we’ve got to knock down the walls that have long kept our devs, ops and security folks in separate corners. Balancing the need for rapid deployment with security considerations can be challenging. To nail DevSecOps, teams must level up their skills through targeted training. Weaving together seasoned systems with cutting-edge DevSecOps tactics calls for a sharp, strategic approach. 


Critical Android Vulnerability Impacting Millions of Pixel Devices Worldwide

This backdoor vulnerability, undetectable by standard security measures, allows unauthorized remote code execution, enabling cybercriminals to compromise devices without user intervention or knowledge due to the app’s privileged system-level status and inability to be uninstalled. The Showcase.apk application possesses excessive system-level privileges, enabling it to fundamentally alter the phone’s operating system despite performing a function that does not necessitate such high permissions. An application’s configuration file retrieval lacks essential security measures, such as domain verification, potentially exposing the device to unauthorized modifications and malicious code execution through compromised configuration parameters. The application suffers from multiple security vulnerabilities. Insecure default variable initialization during certificate and signature verification allows bypass of validation checks. Configuration file tampering risks compromise, while the application’s reliance on bundled public keys, signatures, and certificates creates a bypass vector for verification.
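
To make concrete what the missing protections would look like, here is a hedged sketch, not the actual Showcase.apk code or its fix: a safe configuration fetch would restrict the source to an allow-listed domain and verify a signature over the payload, failing closed rather than defaulting to valid. The domain, key and payload below are hypothetical, and a real implementation would use asymmetric signatures rather than a shared HMAC key.

# Minimal sketch of the protections the report says were missing: pin the
# configuration host to an allow-list and verify a signature over the payload
# before applying it. Domain, key, and payload are hypothetical.
import hashlib
import hmac
from urllib.parse import urlparse

ALLOWED_DOMAINS = {"config.example-vendor.com"}
SHARED_KEY = b"provisioned-device-key"   # placeholder for a provisioned verification key

def fetch_config(url: str, payload: bytes, signature: str) -> bytes:
    host = urlparse(url).hostname or ""
    if host not in ALLOWED_DOMAINS:                     # domain verification
        raise ValueError(f"config host {host!r} not in allow-list")
    expected = hmac.new(SHARED_KEY, payload, hashlib.sha256).hexdigest()
    if not hmac.compare_digest(expected, signature):    # fail closed, never default to valid
        raise ValueError("configuration signature check failed")
    return payload

config = b'{"feature_flags": {"demo_mode": false}}'
sig = hmac.new(SHARED_KEY, config, hashlib.sha256).hexdigest()
print(fetch_config("https://config.example-vendor.com/showcase.json", config, sig))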


Using Artificial Intelligence in surgery and drug discovery

“We’re seeing how AI is adapting, learning, and starting to give us more suggestions and even take on some independent tasks. This development is particularly thrilling because it spans across diagnostics, therapeutics, and theranostics—covering a wide range of medical areas. We’re on the brink of AI and robotics merging together in a very meaningful way,” Dr Rao said. However, he said he would like to add a word of caution. He said he often tells junior enthusiasts who are eager to use AI in everything: AI is not a replacement for natural stupidity. ... He said that one of the most impressive applications of this AI was during the preparation of a US FDA application, which is typically a very cumbersome and expensive process. “At that point, I’d already completed the preclinical phase but wasn’t certain about the additional 20-30 tests I might need. Instead of spending hundreds of thousands of dollars on trial and error, we fed all our data into this AI system. Now, it’s important to note that pharma companies are usually reluctant to share their proprietary data, so gathering information is often a challenge,” he said.  


Mastercard Is Betting on Crypto—But Not Stablecoins

“We’re opening up this crypto purchase power to our 100 million-plus acceptance locations,” Raj Dhamodharan, Mastercard's head of crypto and blockchain, told Decrypt. “If consumers want to buy into it, if they want to be able to use it, we want to enable that—in a safe way.” Perhaps in the name of safety, the new MetaMask Card isn’t compatible with most cryptocurrencies. You can’t use it to buy a plane ticket with Pepecoin, or a sandwich with SHIB. The card is only compatible with dominant stablecoins USDT and USDC, as well as wrapped Ethereum. ... Dhamodharan and his team are currently endeavoring to create an alternative system to stablecoins that—instead of putting crypto companies like Circle and Tether in the catbird seat of the new digital economy—keeps payment services like Mastercard, and traditional banks, at center. Key to this plan is unlocking the potential of bank deposits, which already exist on digital ledgers—just not ones that live on-chain. Dhamodharan estimates that some $15 trillion worth of digital bank deposits currently exist in the United States alone.


A Group Linked To Ransomhub Operation Employs EDR-Killing Tool

Experts believe RansomHub is a rebrand of the Knight ransomware. Knight, also known as Cyclops 2.0, appeared in the threat landscape in May 2023. The malware targets multiple platforms, including Windows, Linux, macOS, ESXi, and Android. The operators used a double extortion model for their RaaS operation. Knight ransomware-as-a-service operation shut down in February 2024, and the malware’s source code was likely sold to the threat actor who relaunched the RansomHub operation. ... “One main difference between the two ransomware families is the commands run through cmd.exe. While the specific commands may vary, they can be configured either when the payload is built or during configuration. Despite the differences in commands, the sequence and method of their execution relative to other operations remain the same.” states the report published by Symantec. Although RansomHub only emerged in February 2024, it has rapidly grown and, over the past three months, has become the fourth most prolific ransomware operator based on the number of publicly claimed attacks.



Quote for the day:

"When your values are clear to you, making decisions becomes easier." -- Roy E. Disney

Daily Tech Digest - August 16, 2024

W3C issues new technical draft for verifiable credentials standards

Part of the promise of the W3C standards is the ability to share only the data that’s necessary for completing a secure digital transaction, Goodwin explained, noting that DHS’s Privacy Office is charged with “embedding and enforcing privacy protections and transparency in all DHS activities.” DHS was brought into the process to review the W3C Verifiable Credentials Data Model and Decentralized Identifiers framework and to advise on potential issues. DHS S&T said in a statement last month that “part of the promise of the W3C standards is the ability to share only the data required for a transaction,” which it sees as “an important step towards putting privacy back in the hands of the people.” “Beyond ensuring global interoperability, standards developed by the W3C undergo wide reviews that ensure that they incorporate security, privacy, accessibility, and internationalization,” said DHS Silicon Valley Innovation Program Managing Director Melissa Oh. “By helping implement these standards in our digital credentialing efforts, S&T, through SVIP, is helping to ensure that the technologies we use make a difference for people in how they secure their digital transactions and protect their privacy.”
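
For readers unfamiliar with the data model, a credential under the W3C Verifiable Credentials Data Model has roughly the following shape (shown here as a Python dict; the issuer, subject, credential type and proof values are placeholders). Selective disclosure then means the holder presents only the claims inside credentialSubject that a given transaction actually needs:

# Rough shape of a W3C Verifiable Credential, shown as a Python dict.
# The issuer DID, subject DID, credential type, and proof values are placeholders.
credential = {
    "@context": ["https://www.w3.org/2018/credentials/v1"],
    "type": ["VerifiableCredential", "DriverLicenseCredential"],
    "issuer": "did:example:state-dmv",
    "issuanceDate": "2024-08-01T00:00:00Z",
    "credentialSubject": {
        "id": "did:example:holder-123",
        "birthDate": "1990-05-17",
        "licenseClass": "C",
    },
    "proof": {
        "type": "DataIntegrityProof",
        "verificationMethod": "did:example:state-dmv#key-1",
        "proofValue": "placeholder-signature-value",
    },
}

# Selective disclosure: present only the claim a transaction needs,
# such as proof of age, rather than the whole credential subject.
presented_claims = {"birthDate": credential["credentialSubject"]["birthDate"]}
print(presented_claims)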


Managing Technical Debt in the Midst of Modernization

Rather than delivering a product and then worrying about technical debt, it is more prudent to measure and address it continuously from the early stages of a project, including requirement and design, not just the coding phase. Project teams should be incentivized to identify improvement areas as part of their day-to-day work and implement the fixes as and when possible. Early detection and remediation can help streamline IT operations, improve efficiencies, and optimize cost. ... Inadequate technical knowledge or limited experience in the latest skills itself leads to technical debt. Enterprises must invest and prioritize continuous learning to keep their talent pool up to date with the latest technologies. A skill-gap analysis helps forecast the need for skills for future initiatives. Teams should be encouraged to upskill in AI, cloud, and other latest technologies, as well as modern design and security standards. This will help enterprises address the technical debt skill-gap effectively. Enterprises can also employ a hub and spoke model, where a central team offers automation and expert guidance while each development team maintains their own applications, systems and related technical debt.


Generative AI Adoption: What’s Fueling the Growth?

The banking, financial services, and insurance (BFSI) sector is another area where generative AI is making a significant impact. In this industry, generative AI enhances customer service, risk management, fraud detection, and regulatory compliance. By automating routine tasks and providing more accurate and timely insights, generative AI helps financial institutions improve efficiency and deliver better services to their customers. For instance, generative AI can be used to create personalized customer experiences by analyzing customer data and predicting their needs. This capability allows banks to offer tailored products and services, improving customer satisfaction and loyalty. ... The life sciences sector stands to benefit enormously from the adoption of generative AI. In this industry, generative AI is used to accelerate drug discovery, facilitate personalized medicine, ensure quality management, and aid in regulatory compliance. By automating and optimizing various processes, generative AI helps life sciences companies bring new treatments to market more quickly and efficiently. For instance, generative AI can draw on masses of biological data to find a probable medication much faster than conventional means.


Overcoming Software Testing ‘Alert Fatigue’

Before “shift left” became the norm, developers would write code that quality assurance testing teams would then comb through to identify the initial bugs in the product. Developers were then only tasked with reviewing the proofed end product to ensure it functioned as they initially envisioned. But now, the testing and quality control onus has been put on developers earlier and earlier. An outcome of this dynamic is that developers are becoming increasingly numb to the high volume of bugs they are coming across in the process, and as a result, they are pushing bad code to production. ... Organizations must ensure that vital testing phases are robust and well-defined to mitigate these adverse outcomes. These phases should include comprehensive automated testing, continuous integration (CI) practices, and rigorous manual testing by dedicated QA teams. Developers should focus on unit and integration tests, while QA teams handle system, regression, acceptance, and exploratory testing. This division of labor enables developers to concentrate on writing and refining code while QA specialists ensure the software meets the highest quality standards before production.


SSD capacities set to surge as industry eyes 128 TB drives

Maximum SSD capacity is expected to double from its current 61.44 TB by mid-2025, giving us 122 TB and even 128 TB drives, with the prospect of exabyte-capacity racks. Five suppliers have discussed and/or demonstrated prototypes of 100-plus TB capacity SSDs recently. ... Systems with enclosures full of high-capacity SSDs will need to cope with drive failure and that means RAID or erasure coding schemes. SSD rebuilds take less time than HDD rebuilds, but higher-capacity SSDs take longer. For example, the 61.44 TB Solidigm D5-P5336 drive has a max sequential write bandwidth of 3 GBps, so rebuilding it would take approximately 5.7 hours. A 128 TB drive will take 11.85 hours at the same 3 GBps write rate. These are not insubstantial periods. Kioxia has devised an SSD RAID parity compute offload scheme with a parity compute block in the SSD controller and direct memory access to neighboring SSDs to get the rebuild data. This avoids the host server’s processor getting involved in RAID parity compute IO and could accelerate SSD rebuild speed.
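
The rebuild-time figures follow directly from capacity divided by sequential write bandwidth, which a quick back-of-the-envelope check confirms (decimal units, and ignoring the parity-compute and read-side overheads of a real RAID rebuild):

# Quick check of the quoted rebuild times: capacity divided by sequential write
# bandwidth (1 TB = 1000 GB), ignoring parity computation and read overheads.
def rebuild_hours(capacity_tb: float, write_gbps: float) -> float:
    return capacity_tb * 1000 / write_gbps / 3600

print(f"61.44 TB at 3 GBps: {rebuild_hours(61.44, 3):.1f} hours")   # ~5.7 hours
print(f"128 TB at 3 GBps: {rebuild_hours(128, 3):.2f} hours")       # ~11.85 hours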


Putting Individuals Back In Charge Of Their Own Identities

Digital identity comprises many signals to ensure it can accurately reflect the real identity of the relevant individual. It includes biometric data, ID data, phone data, and much more. In shareable IDs, these unique features are captured through a combination of AI and biometrics, which provide robust protection against forgery and replication, and so give high assurance that a person is who they say they are. Importantly, these technologies provide an easy and seamless alternative to other verification processes. For most people, visiting a bank branch to prove their identity with paper documents is no longer convenient, while knowledge-based authentication, like entering your mother’s maiden name, is not viable because data breaches make this information readily available for sale to nefarious actors. It’s no wonder that 76% of consumers find biometrics more convenient, while 80% find them more secure than other options. ... A shareable identity is a user-controlled identity credential that can be stored on a device and used remotely. Individuals can then simply re-use the same digital ID to gain access to services without waiting in line, offering time-saving convenience for all.


Revolutionizing cloud security with AI

Generative AI can analyze data from various sources, including social media, forums, and the dark web. AI models use this data to predict threat vectors and offer actionable insights. Enhanced threat intelligence systems can help organizations better understand the evolving threat landscape and prepare for potential attacks. Moreover, machine learning algorithms can automate threat detection across cloud environments, shortening incident response times. ... AI-driven automation is becoming helpful in handling repetitive security tasks, allowing human security professionals to focus on more complex challenges. Automation helps streamline and triage alerts, incident response, and vulnerability management. AI algorithms can process incident data faster than human operators, enabling quicker resolution and minimizing potential damage. ... AI models can enforce privacy policies by monitoring data access while ensuring compliance with regulations such as the European Union’s General Data Protection Regulation or the California Consumer Privacy Act. When bolstered by AI, homomorphic encryption and differential privacy techniques offer ways to analyze data while keeping sensitive information secure and anonymous.
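As a concrete illustration of one of the techniques mentioned above, here is a minimal sketch of the Laplace mechanism from differential privacy: calibrated noise is added to an aggregate query so that individual records stay anonymous. The query, sensitivity, and epsilon values are hypothetical, not any vendor’s implementation.

```python
import numpy as np


def laplace_mechanism(true_value: float, sensitivity: float, epsilon: float) -> float:
    """Return a differentially private estimate of an aggregate statistic.

    Noise scale = sensitivity / epsilon: a smaller epsilon means stronger
    privacy guarantees and noisier answers.
    """
    noise = np.random.laplace(loc=0.0, scale=sensitivity / epsilon)
    return true_value + noise


# Example: release a privacy-preserving count of users who triggered a security alert.
true_count = 1342  # hypothetical raw count
private_count = laplace_mechanism(true_count, sensitivity=1.0, epsilon=0.5)
print(f"Private count: {private_count:.0f}")
```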


Are CIOs at the Helm of Leading Generative AI Agenda?

The growing integration of generative AI into corporate technology and information infrastructures is likely to bring a notable shift to the role of CIOs. While many technology leaders are already spearheading gen AI adoption, their role goes beyond technology management. It now includes driving strategic growth and maintaining a competitive edge in an AI-driven landscape. ... The CIO role has evolved significantly over recent decades. Once focused primarily on maintaining system uptime and availability, CIOs now serve as key business enablers. As technology advances rapidly and organizations increasingly rely on IT, the CIO's influence on enterprise success continues to grow. According to the EY survey, CIOs who report directly to the CEO and co-lead the AI agenda are the most effective in driving strategic change. Sixty-three percent of CIOs are leading the gen AI agenda in their organizations, with CEOs close behind at 55%. Eighty-four percent of organizations where the gen AI agenda is co-led by the CIO and CEO achieve or anticipate achieving a 2x return on investment from gen AI, compared to only 56% of organizations where the agenda is led solely by CIOs.


Intel and Karma partner to develop software-defined car architecture

Instead of all those individual black boxes, each with a single job, the new approach is to consolidate the car's various functions into domains, with each domain being controlled by a relatively powerful car computer. These will be linked via Ethernet, usually with a master domain controller overseeing the entire network. We're already starting to see vehicles designed with this approach; the McLaren Artura, Audi Q6 e-tron, and Porsche Macan are all recent examples of software-defined vehicles. Volkswagen Group—which owns Audi and Porsche—is also investing $5 billion in Rivian specifically to develop a new software-defined vehicle architecture for future electric vehicles. In addition to advantages in processing power and weight savings, software-defined vehicles are easier to update over-the-air, a must-have feature since Tesla changed that paradigm. Karma and Intel say their architecture should also have other efficiency benefits. ... Intel is also contributing its power management SoC to get the most out of inverters, DC-DC converters, chargers, and as you might expect, the domain controllers use Intel silicon as well, apparently with some flavor of AI enabled.
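As a rough mental model of the domain-based layout described above, the sketch below groups vehicle functions under a few domain controllers that report to a master controller. The domain names and function assignments are hypothetical and do not reflect Karma’s or Intel’s actual design.

```python
from dataclasses import dataclass, field


@dataclass
class DomainController:
    name: str
    functions: list  # functions consolidated from what used to be separate ECUs


@dataclass
class MasterDomainController:
    domains: list = field(default_factory=list)

    def report(self) -> None:
        for domain in self.domains:
            print(f"{domain.name}: {', '.join(domain.functions)}")


# Hypothetical grouping of functions into domains linked over the in-vehicle network.
vehicle = MasterDomainController(domains=[
    DomainController("powertrain", ["inverter control", "DC-DC conversion", "charging"]),
    DomainController("cockpit", ["infotainment", "instrument cluster"]),
    DomainController("body", ["lighting", "climate", "door locks"]),
])
vehicle.report()
```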


Why the next Ashley Madison is just around the corner

Unfortunately, it’s not a matter of ‘if’ another huge data breach will occur – it’s simply a matter of when. Today organisations of all sizes, not just the big players, have a ticking time bomb on their hands with the potential to detonate their brand reputation and destroy customer loyalty. ... Due to a lack of dedicated cybersecurity teams and finite financial resources to allocate to protective measures, small organisations will often prove easier to successfully infiltrate than the average big player. The potential reward from a single attack may be smaller, but hackers can combine successful attacks against multiple SMEs to match the financial gain of successfully hacking a large organisation, and with far less effort. SMEs are therefore increasingly likely to fall victim to financially crippling attacks, with 46% of all cyber breaches now impacting businesses with fewer than 1,000 employees. ... The very first step in any attack chain is always the use of tools to gather intelligence about the victim’s systems, the version numbers of unpatched software in use, and insecure configuration or programming. Any hacker, whether professional or amateur, uses scanning bots or websites like Shodan.io to generate a list of potential victims running vulnerable software.



Quote for the day:

“No one knows how hard you had to fight to become who you are today.” -- Unknown

Daily Tech Digest - August 15, 2024

Better Cloud Security Means Getting Back to Basics

Securing the cloud isn’t rocket science – it just requires a little extra knowledge. While it’s tempting to think of the cloud as a new frontier in computing (and, in some ways, it is), cloud security solutions have been around for almost as long as the cloud itself. The trouble is that most organizations don’t know how they should think about cloud security in the first place. ... A good starting point for many organizations is simply evaluating how effective their existing cloud security is. It isn’t enough to implement security solutions – even if they’re the right solutions. It’s also important to know that they are functioning as intended. Today’s organizations have more testing and validation tools at their fingertips than ever, and conducting breach and attack simulation, automated red teaming, and other exercises can lay bare where vulnerabilities and inefficiencies exist. Recent testing reveals that the basic security suites offered by the leading cloud providers are not enough to detect all – or even most – attack activity, highlighting the areas where organizations need to implement new protections and providing insight into what additional solutions may be necessary.


Cloud Waste Management: How to Optimize Your Cloud Resources

To better understand cloud waste, we need to understand the iron triangle of project management, which states that there is always a tradeoff between speed, quality, and cost. If you want to deliver a quality product or feature quickly, it will cost you more. Businesses are always trying to innovate and deliver continuous value to their customers. Often, that means putting pressure on delivery teams to improve time to market. One effect is over-provisioned capacity: resources spun up to validate a theory or concept are never deleted once teams move on to delivering the chosen solution or to another project assignment. This is one of the major causes of cloud waste. ... Since you pay for each resource provisioned in the cloud, managing cloud waste becomes critical, as it directly impacts your business’s bottom line. CFOs and finance teams struggle to forecast and budget for cloud spend because they never know what capacity is being wasted in the cloud, and there is no good way to review it regularly.
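One practical way to surface this kind of waste, assuming an AWS environment and the boto3 SDK, is to list storage volumes that are provisioned but attached to nothing. This is a minimal sketch rather than a complete cost-governance tool; credentials and region are assumed to be configured.

```python
import boto3


def find_unattached_volumes(region: str = "us-east-1") -> None:
    """Print EBS volumes in the 'available' state, i.e. billed but attached to nothing."""
    ec2 = boto3.client("ec2", region_name=region)
    response = ec2.describe_volumes(
        Filters=[{"Name": "status", "Values": ["available"]}]
    )
    for volume in response["Volumes"]:
        print(f"{volume['VolumeId']}: {volume['Size']} GiB, created {volume['CreateTime']}")


if __name__ == "__main__":
    find_unattached_volumes()
```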


Campus NaaS: Transforming Enterprise Networking

The flexibility of the NaaS model allows businesses to experiment with new technologies and use cases without the risk of large, upfront investments in hardware and expertise. This is particularly valuable as emerging technologies like AI and edge computing become more prevalent in enterprise environments. ... The potential benefits of Campus NaaS are significant, but organizations must carefully evaluate potential NaaS providers. Standards-based solutions ensure interoperability between different NaaS components and service providers, allowing businesses to seamlessly integrate NaaS solutions from various vendors without compatibility issues. Security capabilities and long-term roadmaps should also be considered. Campus NaaS is poised to play a pivotal role in shaping the future of enterprise networking, enabling businesses to build the agile, high-performance foundations needed to thrive in an increasingly digital world. As the technology continues to evolve and mature, we can expect to see even more innovative use cases and deployment models emerge, further cementing the role of Campus NaaS as a cornerstone of modern enterprise IT strategy.


Applying Security Everywhere – How to Prioritise Risks Across Multiple Platforms

For IT architects and security teams, the joint challenge here is actually one of the oldest in IT – knowing what you have. Getting an accurate inventory of all your software assets and components is a hard task on one platform, let alone across internal datacenter deployments, web applications, public cloud implementations and modern cloud-native applications. Keeping this inventory up to date is harder still, given how much change will take place over time across the entire application estate. Alongside this inventory, there are other factors to consider. Not all applications are created equal, and an issue in an internal web application used by a few people every month will not be as important as a critical vulnerability in a business application that is responsible for generating revenue every day. Yet both of these applications may have a flaw, and alerts get sent requesting fixes or updates for each. Internal processes and workflows will also affect the situation. While security teams might spot potential issues in an application or software component like an API, they will not be responsible for making the change themselves.
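One simple way to express the point about unequal applications is to weight each finding by both technical severity and business criticality, so a revenue-critical system outranks a rarely used internal tool even when its raw severity score is lower. The scoring formula and the sample assets below are hypothetical.

```python
from dataclasses import dataclass


@dataclass
class Finding:
    asset: str
    cvss: float                # technical severity, 0-10
    business_criticality: int  # 1 = low-use internal app, 5 = revenue-critical


def priority(finding: Finding) -> float:
    # Hypothetical weighting: severity scaled by how much the business depends on the asset.
    return finding.cvss * finding.business_criticality


findings = [
    Finding("internal reporting app", cvss=7.5, business_criticality=1),
    Finding("customer payments API", cvss=6.8, business_criticality=5),
]

for finding in sorted(findings, key=priority, reverse=True):
    print(f"{finding.asset}: priority {priority(finding):.1f}")
```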


Attempting Digital Transformation? Try Embracing Team Resistance

Resistance to transformation has several causes, Dewal says. First off, many logistics professionals already feel slammed, and don’t welcome the idea of new work. “It can feel like an add-on, creating competing priorities,” she says. Then there’s a fear-based resistance to the perceived complexity of the new tasks involved. “It’s too complex and we don’t have the right skill sets to be able to execute on them,” she says, describing this mindset. “Collectively, let’s call it the fear of failure, of getting it wrong.” Finally, there’s the familiar human tendency to prefer sticking with the status quo. “That can hide variations underneath it,” Dewal says. “Sometimes the team is not even sure why the transformation is needed. Sometimes, they feel like they’re not getting enough support in terms of executing it.” Further, the survey dug into two types of resistance – productive and unproductive. Productive resistance is the type that comes from on-the-ground knowledge and expertise that relates to the implementation itself. ... Leaders who avoided a top-down, change-or-die approach, and instead focused on communication and collaboration, had a much better chance of success, the survey found.


How leading CISOs build business-critical cyber cultures

In information security, where risk is widespread, attacks are becoming increasingly sophisticated, and so much is on the line, one of the defining attributes of successful CISOs is their courage. The good news is, courage is a muscle that can be developed just like any other. It’s also a mindset. The CISOs on this panel described various internal motivators that keep them in the game, resilient, and adaptable, even in the face of daunting challenges. They made it clear that it’s a lot easier to be courageous when you’re driven by a love for what you do and maintain a clear line of sight to the impact you’re making. One of the common threads is their focus on “moments of truth,” those points of contact between cybersecurity and various stakeholders. Leaders who are intentional about this find they’re better able to see around corners and show up more strategically as business enablers. Rodgers says it’s a lesson she learned in the early days of her career when she worked on a help desk. Fielding complaints all day takes its own kind of courage. “But the beauty of it is, you get to know people and how they work,” she says. “I got to a point where I could anticipate what they were going to want, so I started proactively providing those things. ...”


How passkeys eliminate password management headaches

There are several usability challenges that could affect the adoption of passkeys. Key among them is compatibility, as passkeys may not work on outdated operating systems or older devices. Beyond the technical roadblocks, user resistance is often the reason new technology such as passkeys fails to gain adoption. After all, users have been leveraging passwords since the early 1960s. Emphasizing training and education on how to provision passkeys is essential to adoption, as registration could be challenging for non-tech-savvy users. It may be best to start with small groups or departments to address unique challenges within the organization’s diverse culture and educate users. Organizations are starting to adopt passkeys to enhance security and optimize productivity, and as with any new implementation, there will be challenges. Passkey implementation should begin with top-level leadership as early adopters, which will help employees buy in and ensure a smooth transition from traditional passwords to passkeys. Upfront investment in planning, and creating robust policies and processes, will be critical to the implementation’s success.


Six Common Digital Transformation Challenges

Aligned leadership helps in allocating resources efficiently, prioritizing initiatives that drive the most value, and mitigating risks associated with digital transformation efforts. Clear, consistent communication from aligned leaders also builds trust and motivates teams to adapt to new paradigms. Ultimately, leadership alignment serves as the backbone of successful digital transformation by driving coherent strategies and fostering an environment conducive to innovation and agility. Effective communication is paramount, with transparent discussions about goals, challenges, and expected outcomes. Additionally, establishing cross-functional teams can help integrate diverse perspectives, facilitating smoother transitions during technology adoption. By embedding these practices into the organizational fabric, leaders can drive successful digital transformation while maintaining strategic coherence. Addressing resistance to change and fostering a digital mindset among leaders is pivotal in navigating this digital transformation challenge. Resistance often stems from a fear of the unknown and a reluctance to abandon established processes. 


Why Can’t Automation Eliminate Configuration Errors?

The emergence of configuration intelligence changes the game in several ways. First, it means that anyone tasked with maintaining configurations can save a lot of time and trouble that used to involve manual, tedious but cognitively intense tasks like reading through YAML manifests or config files to identify tiny errors. Yes, some tools existed to do this before, but they mostly functioned more like “linters,” spotting obvious syntax errors. By simplifying the process, configuration intelligence drastically reduces the time spent manually maintaining configs. ... The lack of detailed expertise has been a traditional problem of IaC products, which struggle to keep up with configuration recommendations across the dozens of software applications and infrastructure components they manage and automate. The lack of detailed configuration expertise also creates a cadre of in-house experts, who become key sources of institutional memory but also major risks. When your load-balancing guru walks out the door to take another job, everything they know that’s not clearly documented goes out the door too.
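The difference between a linter and this kind of configuration intelligence is roughly the difference between checking syntax and checking semantics. The sketch below is a hypothetical illustration assuming a Kubernetes-style YAML manifest and the PyYAML library; real tools apply far richer rule sets.

```python
import yaml

manifest = yaml.safe_load("""
apiVersion: apps/v1
kind: Deployment
spec:
  replicas: 1
  template:
    spec:
      containers:
        - name: web
          resources: {}
""")

# A linter would accept this: the YAML parses and the fields are well-formed.
# A semantic check flags settings that are syntactically valid but risky.
issues = []
if manifest["spec"].get("replicas", 1) < 2:
    issues.append("single replica: no redundancy if the pod fails")
for container in manifest["spec"]["template"]["spec"]["containers"]:
    if not container.get("resources"):
        issues.append(f"container '{container['name']}' has no resource requests or limits")

for issue in issues:
    print("WARN:", issue)
```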


Enterprise spending on cloud services keeps accelerating

“Enterprises are also choosing to house an ever-growing proportion of their data center gear in colocation facilities, further reducing the need for on-premise data center capacity. The rise of generative AI technology and services will only exacerbate those trends over the next few years, as hyperscale operators are better positioned to run AI operations than most enterprises,” he wrote. Dinsdale told me the workloads staying on-premises tend to be workloads that are either very complex and cannot easily be transitioned, are focused on highly sensitive data, are governed or influenced by regulatory issues, or are highly predictable and can be managed economically on premises. Enterprises worldwide are spending around $100 billion per year on their own data center IT hardware and associated infrastructure software, a figure that has held flat for the last several years. By comparison, enterprises are now spending $80 billion per quarter on cloud services, not to mention another $65 billion per quarter on SaaS. “And those cloud and SaaS numbers are growing like gangbusters,” he said.



Quote for the day:

"The whole point of getting things done is knowing what to leave undone." -- Lady Stella Reading