Daily Tech Digest - August 12, 2024

In three or four years, ‘we won’t even talk about AI’

In general, there’s a very positive view of AI in tech. In a lot of other industries, there’s some uncertainty, some trepidation, some curiosity. But our pulse survey found that about three out of four tech workers are using AI on a daily basis. So, the adoption in this portfolio of companies is higher than most, and I’d also say most employers and workers have a very good idea that AI is going to improve their business and their work. ... “I view AI skills as adjacent, additive skills for most people — aside from really hardcore data scientists and AI engineers. This is how most people will work in the new world. Generally, it depends. Some organizations have built whole, distinct AI organizations. Others have embedded AI domains in all of their job functions. It really depends. There’s a lot of discussion around whether companies should have a chief AI officer. I’m not sure that’s necessary. I think a lot of those functions are already in place. You do need someone in your organization who has a holistic view of the positive sides of this and the risks associated with this.”


The AI Balancing Act: Innovating While Safeguarding Consumer Privacy

There are two sides to every coin. While AI can further compliance efforts, it can also create new privacy and security challenges. This is particularly true today, amid an ongoing global effort to strengthen data privacy laws. Some 71% of countries now have data privacy legislation, and in recent years this has evolved to encapsulate AI. In the EU, for instance, the European Parliament has approved a dedicated AI regulatory framework. This framework imposes specific obligations on providers of high-risk AI systems and could ban certain AI-powered applications. The fact is, AI-powered technology is immensely powerful. But it comes with complex challenges to data privacy compliance. A primary concern here relates to purpose limitation, specifically the disclosure provided to consumers regarding the purpose(s) for data processing and the consent obtained. As AI systems evolve, they may find new ways to utilise data, potentially extending beyond the scope of the original disclosure and consent agreement. As such, maintaining transparency in AI operations to ensure accurate and appropriate data use disclosures is critical.
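The purpose-limitation idea above is mechanical enough to sketch: before any processing run, check the intended purpose against what the data subject actually consented to. This is a minimal illustration, not a compliance implementation; the record structure and purpose names are assumptions.

```python
from dataclasses import dataclass, field

@dataclass
class ConsentRecord:
    subject_id: str
    allowed_purposes: set[str] = field(default_factory=set)

def may_process(record: ConsentRecord, purpose: str) -> bool:
    """Allow processing only for purposes disclosed to the subject."""
    return purpose in record.allowed_purposes

consent = ConsentRecord("user-123", {"billing", "fraud-detection"})
may_process(consent, "billing")         # disclosed at collection time
may_process(consent, "model-training")  # a new AI use: blocked until fresh consent
```

The point of the sketch is that a newly discovered AI use of the data fails the gate by default, which is exactly the transparency property the excerpt calls for.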


Is biometric authentication still effective?

With the rapid advancement and accessibility of technologies, the efficacy and security of biometric authentication methods are under threat. Fraudsters are using spoofing techniques to replicate or falsify biometric data, such as creating synthetic fingerprints or 3D facial models, to fool sensors, mimic legitimate biometric traits and gain unauthorized access to secured services. ... Unlike traditional biometric authentication, which relies on static physical attributes, behavioral biometrics verify user identity based on unique interaction patterns, such as typing rhythm, mouse movements and touchscreen interactions. This shift is essential because behavioral biometrics offer a more dynamic and adaptive layer of security, making it significantly harder for fraudsters to replicate or mask. ... With data scattered across different systems, it’s challenging to correlate information, connect the dots and identify overarching patterns of bad behavior. A decentralized approach causes businesses to overlook crucial fraud indicators and struggle to respond effectively to emerging threats due to the lack of visibility and coordination among disparate fraud prevention tools.
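Typing rhythm, one of the behavioral signals named above, can be reduced to a vector of inter-keystroke delays and compared against an enrolled profile. The distance metric and threshold below are illustrative assumptions, not a production-grade system.

```python
def rhythm_distance(sample, profile):
    """Mean absolute difference between two inter-keystroke timing vectors (ms)."""
    if len(sample) != len(profile):
        raise ValueError("timing vectors must be the same length")
    return sum(abs(s - p) for s, p in zip(sample, profile)) / len(sample)

enrolled = [110, 95, 140, 120]   # the user's typical delays between keys, in ms
attempt  = [112, 98, 138, 125]   # a genuine login attempt
impostor = [60, 200, 45, 310]    # a fraudster typing the same credentials

THRESHOLD = 20  # ms; an illustrative cut-off
rhythm_distance(attempt, enrolled) < THRESHOLD    # True: rhythm matches
rhythm_distance(impostor, enrolled) < THRESHOLD   # False: rhythm does not
```

Even when credentials and static biometrics are spoofed, the impostor's interaction timing diverges sharply, which is why this layer is hard to replicate.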


Practical strategies for mitigating API security risks

Identity and access management is crucial for a complete API security strategy. IAM facilitates efficient user management from creation to deactivation and ensures that only authorized individuals access APIs. IAM enables granular access control, granting permissions based on specific attributes and resources rather than just predefined roles. Integration with security information and event management (SIEM) systems enhances security by providing centralized visibility and enabling better threat detection and response. AI and machine learning are revolutionizing API security by providing sophisticated tools that enhance design, testing, threat detection, and overall governance. These technologies improve the robustness and resilience of APIs, enabling organizations to stay ahead of emerging threats and regulatory changes. As AI evolves, its role in API security will become increasingly vital, offering innovative solutions to the complex challenges of safeguarding digital assets. AI in API security goes beyond the limitations of human or rule-based interventions, enabling advanced pattern recognition and automating security audits and governance for greater defense against evolving threats.
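The "granular access control, granting permissions based on specific attributes and resources rather than just predefined roles" described above is attribute-based access control (ABAC). A minimal sketch, with made-up attribute names, shows how a decision combines caller and resource attributes:

```python
# ABAC sketch: the decision derives from attributes of the caller and the
# resource, not only from a predefined role. Attribute names are illustrative.
def can_access(user: dict, resource: dict, action: str) -> bool:
    if user.get("department") != resource.get("owning_department"):
        return False
    if action == "write" and user.get("clearance", 0) < resource.get("min_clearance", 0):
        return False
    return True

analyst = {"department": "finance", "clearance": 2}
report  = {"owning_department": "finance", "min_clearance": 3}
can_access(analyst, report, "read")    # True: same department
can_access(analyst, report, "write")   # False: clearance too low for writes
```

Because each decision is evaluated per request, the same policy can also be logged to a SIEM, giving the centralized visibility the excerpt mentions.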


The evolution of the CTO – from tech keeper to strategic leader

CTOs have experienced a huge shift in how they are positioned in the workplace. They are no longer part of a small or medium-sized team that operates separately from the rest of the business; they are the key to tangible business growth and perhaps one of the most crucial parts of a leadership team. The main duty of CTOs is to maintain – and where possible, to modernise – tech, and to decide when something has kicked the bucket and no longer has a purpose. These things require people power, specialist skills and money. Needless to say, investment in the role is vital. Tech leaders often feel burnt out, or worried that they don’t have the resources and support needed to do their job well. ... The saying goes, “You can never set foot in the same river twice,” and the same is true for leaders in tech – everything evolves from the moment you start working on a project. There is much to appreciate about technology that remains stable yet adaptable when changes are necessary during development. Today, innovative CTOs are on the lookout for software solutions that come with the flexibility to make an important U-turn if ever needed.


How AIOps Is Transforming IT Operations Management

IT operations management has become increasingly challenging as networks have become larger and more complex, with the introduction of remote workers and the distribution of applications and workloads across networks. Traditional operations management tools and practices struggle to keep up with the ever-growing volumes of data from multiple sources within complex and varied network environments. AIOps was designed to bring the speed, accuracy and predictive capabilities of AI technology to IT operations. AIOps provides contextually enriched, deep end-to-end, real-time insights that can be proactively acted upon, according to Forrester. AIOps solutions use real-time telemetry, developing patterns and historical operational data to perform real-time assessments of what is happening, whether it has happened before or not, what paths it might take, and what negative effects it might have on business operations. ... A "digitally mature" organization has a much better ROI on the AI investment. But because this is a "rolling target" and not static, an organization's IT infrastructure "must be able to adapt and change," Ramamoorthy said.
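The "real-time assessments of what is happening, whether it has happened before or not" described above typically starts with statistical anomaly detection over telemetry. A minimal sketch, using an assumed z-score threshold over recent latency readings:

```python
import statistics

def is_anomalous(history, latest, z_threshold=3.0):
    """Flag a telemetry reading that deviates strongly from recent history."""
    mean = statistics.fmean(history)
    stdev = statistics.pstdev(history)
    if stdev == 0:
        return latest != mean
    return abs(latest - mean) / stdev > z_threshold

latencies_ms = [102, 98, 105, 99, 101, 97, 103, 100]  # recent baseline window
is_anomalous(latencies_ms, 104)   # within normal variation
is_anomalous(latencies_ms, 450)   # far outside it: a candidate incident signal
```

Production AIOps platforms layer learned seasonality and cross-signal correlation on top of this, but the core move — comparing live telemetry against historical operational data — is the same.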


The cyber assault on healthcare: What the Change Healthcare breach reveals

Many security leaders report that they don’t have adequate resources to implement the needed security measures because they’re often competing with pricey life-saving medical equipment for the limited funds available to spend, Kim says. Furthermore, he says their complex technology environments can make applying and creating security in depth not only more challenging but more costly, too. That, in turn, makes it less likely for CISOs to get the resources they need. Security teams in healthcare also have more challenges in updating and patching systems, Riggi explains, as the sector’s need for 24/7 availability means organizations can’t easily go offline — if they can go offline at all — to perform needed work. Healthcare security leaders also have a rapidly expanding tech environment to secure, as both more partners and more patients with remote medical devices become part of the sector’s already highly interconnected environment, says Errol S. Weiss, chief security officer at Health-ISAC. Such expansion heightens the challenges, complexities and costs of implementing security controls as well as heightening the risks that a successful attack against one point in that web would impact many others.


Solar Power Installations Worldwide Open to Cloud API Bugs

"The issue we discovered lies in the cloud APIs that connect the hardware with the user," both on Solarman's platform and on Deye Cloud, says Bogdan Botezatu, director of threat research and reporting at Bitdefender. "These APIs have vulnerable endpoints that allow an unauthorized third party to change settings or otherwise control the inverters and data loggers via the vulnerable Solarman and Deye platforms," he says. Bitdefender, for instance, found that the Solarman platform's /oauth2-s/oauth/token API endpoint would let an attacker generate authorization tokens for any regular or business accounts on the platform. "This means that a malicious user could iterate through all accounts, take over any of them and modify inverter parameters or change how the inverter interacts with the grid," Bitdefender said in its report. The security vendor also found Solarman's API endpoints to be exposing an excessive amount of information — including personally identifiable information — about organizations and individuals on the platform.
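The class of flaw described — a token endpoint that mints tokens for any requested account — can be sketched abstractly. This is not Solarman's actual code; all names here are illustrative, and the sketch only contrasts trusting client input with binding tokens to the server-verified identity:

```python
def issue_token_vulnerable(requested_account_id: str) -> str:
    # Flawed pattern: trusts client input, so any caller can mint a token
    # for any account simply by naming it in the request.
    return f"token-for-{requested_account_id}"

def issue_token_fixed(authenticated_user: str, requested_account_id: str) -> str:
    # Tokens are issued only for the identity the server itself authenticated.
    if authenticated_user != requested_account_id:
        raise PermissionError("caller may only obtain tokens for their own account")
    return f"token-for-{requested_account_id}"
```

In the vulnerable shape, iterating account identifiers is enough for full takeover; in the fixed shape, the authorization decision is anchored to the authenticated session, not to attacker-supplied parameters.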


6 hard truths of generative AI in the enterprise

“Not a week goes by without another new tool that is mind-blowing in its abilities and potential future impact,’’ agrees David Higginson, chief innovation officer and executive vice president of Phoenix Children’s Hospital. But right now genAI “can really only be executed by a small number of technology giants rather than being tinkered with at a local skunkworks level within a healthcare organization,’’ he says. “Therefore, it feels as if we are in a bit of a paused state waiting for established vendors to deliver mature solutions that can provide the tangible value we all anticipated.” ... The fundamental barriers to adopting genAI are the scarcity and cost of the hardware, power, and data needed to train models, Higginson says. “With such scarcity comes the need to prioritize which solutions have the broadest appeal to the population and can generate the most long-term revenue,’’ he says. ... While research and development continue to move the needle on what genAI can do, “we know that data is a critical aspect to enabling AI solutions and we also recognize that many organizations are uncovering the work it will take to build the right data foundations to support scaled AI deployments,” says Deloitte’s Rowan.


Investing in Capacity to Adapt to Surprises in Software-Reliant Businesses

A well-known and contrarian adage in the Resilience Engineering community is that Murphy's Law - "anything that can go wrong, will" - is wrong. What can go wrong almost never does, but we don't tend to notice that. People engaged in modern work (not just software engineers) are continually adapting what they’re doing, according to the context they find themselves in. They’re able to avoid problems in almost everything they do, almost all of the time. When things do go "sideways" and an issue crops up that they need to handle or rectify, they are able to adapt to these situations due to the expertise they have. Research in decision-making described in the article Seeing the invisible: Perceptual-cognitive aspects of expertise by Klein, G. A., & Hoffman, R. R. (2020) reveals that while demonstrations of expertise play out in time-pressured and high-consequence events (like incident response), expertise comes from experience with facing varying situations involved with "ordinary" everyday work. It is "hidden" because the speed and ease with which experts do ordinary work contrasts with how sophisticated the work is.



Quote for the day:

"True leadership must be for the benefit of the followers, not the enrichment of the leaders." -- Robert Townsend

Daily Tech Digest - August 11, 2024

Three Tips For Tackling Software Complexity And Technical Debt With Architectural Observability

Software teams and engineering leaders face the critical challenge of managing complex architectures, preventing architectural drift and addressing technical debt effectively. Without a clear understanding of their application’s architecture and the ability to observe changes over time, teams risk increased complexity, reduced agility and potential market irrelevance. ... By identifying the root cause of architectural complexity and improving application modularity, teams can move faster to create more resilient, scalable and maintainable applications. Continuously observing software architecture offers a real-time understanding of how it evolves from release to release to make better decisions about the right architectural choices for their business. ... The fast pace of release cycles has resulted in architects and engineers being overburdened and unsure where to begin in untangling complex architectures. With architectural observability, teams get a clearer sense of where to start. They can prioritize ATD remediation based on their most significant pain points. By prioritizing tasks according to pain point importance, teams ensure they solve the most urgent problems first.


Managing Technology Debt: Practical Tips to Improve Your Codebase

Identifying and prioritizing areas needing attention is the first step in managing technical debt. Regular code reviews are a practical approach to identifying and addressing unintentional technology debt before it escalates. Factors to consider when prioritizing technical debt include its ability to impede development cycles, functionality, and user experience. Creating greater transparency around technical debt can be achieved by tracking and communicating it regularly. Practices that can help assess technical debt include involving stakeholders, conducting regular code reviews, and having discussions about debt metaphors. ... If the tech debt is too extensive, it may make more sense to migrate away by building or acquiring new technology. We’ve employed this strategy in situations where the existing codebase was too brittle to justify extensive refactoring. An underlying platform to sync security and data between new and old solutions is essential for this strategy to work. There is often a high upfront cost for this strategy, but it can be a powerful way to avoid significant refactoring and loss of revenue from a brittle yet operational product. 
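The prioritization factors named above — impact on development cycles, functionality, and user experience — lend themselves to a simple weighted score for ranking a debt backlog. The weights and scores below are assumptions purely for illustration:

```python
# Hypothetical weighted scoring for ranking technical-debt items.
WEIGHTS = {"dev_cycle_impact": 0.5, "functionality_risk": 0.3, "ux_impact": 0.2}

def debt_score(item: dict) -> float:
    """Weighted sum over the prioritization factors, each scored 0-10."""
    return sum(WEIGHTS[k] * item.get(k, 0) for k in WEIGHTS)

backlog = [
    {"name": "flaky build pipeline",   "dev_cycle_impact": 9, "functionality_risk": 2, "ux_impact": 1},
    {"name": "legacy checkout module", "dev_cycle_impact": 4, "functionality_risk": 8, "ux_impact": 7},
]
ranked = sorted(backlog, key=debt_score, reverse=True)  # highest-priority first
```

Publishing a score like this at each review cycle is one concrete way to create the transparency around technical debt that the excerpt recommends.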


Aligning Cultural and Technical Maturity in Data Science

While some organizations boast high technical maturity with sophisticated data science teams, they may struggle with adoption across their organization. Conversely, others may have a strong cultural inclination towards data-driven decision-making but lack the technical infrastructure to support it. For organizations that are culturally ready to integrate data science into their business but are technically nascent -- referred to as “aspiring” -- there are practical steps to build a robust data science presence. The key is to start small, focusing on foundational skills and gradually tackling more complex problems as the team matures. ... One effective strategy for embedding data science teams within the business is to ensure you prioritize a solid methodological foundation. You can then bring those methodologies to life with the use of technical packages. These are blocks of code or algorithms that can be reused across the organization. They ensure consistency in methodology and save time by preventing data scientists from reinventing the wheel. 


AI could be the breakthrough that allows humanoid robots to jump from science fiction to reality

The potential applications of humanoid robots are vast and varied. Early modern research in humanoid robotics focused on developing robots to operate in extreme environments that are dangerous and difficult for human operators to access. These include Nasa’s Valkyrie robot, designed for space exploration. However, we will probably first see commercial humanoid robots deployed in controlled environments such as manufacturing. Robots such as Tesla’s Optimus could revolutionise manufacturing and logistics by performing tasks that require precision and endurance. They could work alongside human employees, enhancing productivity and safety. ... While the technological potential of humanoid robots is undeniable, the market viability of such products remains uncertain. Several factors will influence their acceptance and success, including cost, reliability, and public perception. Historically, the adoption of new technologies often faces hurdles related to consumer trust and affordability. For Tesla’s Optimus to succeed commercially, it must not only prove its technical capabilities but also demonstrate tangible benefits that outweigh its costs.


Harness software intelligence to conquer complexity and drive innovation

In addition to the technical challenges, the high cognitive load associated with working on a complex application can profoundly impact your team’s morale and job satisfaction. When developers feel overwhelmed, lack control over their work, and are constantly firefighting issues, they experience a sense of chaos and diminished agency. This lack of agency can lead to increased levels of stress and burnout. The ultimate result is higher attrition rates, as team members seek out opportunities where they feel more in control of their work and can make a more meaningful impact. The consequences of high attrition rates in your development team can be far-reaching. Not only does it disrupt the continuity of your projects and slow down progress, but it also results in a loss of valuable institutional knowledge. When experienced developers leave the company, they take with them a deep understanding of the application’s history, quirks, and best practices. This knowledge gap can be difficult to bridge as new team members struggle to get up to speed and navigate the complex codebase, often taking months to become productive. 


Five critical questions to help you increase business resilience

Take time to explore with your technology and engineering leaders how much visibility they have into risks. What tools do they use? Are there any specific roles charged with monitoring or interpreting system data? Does the team have the right capabilities? Do they have the time to pay attention to existing system performance? ... Every organization has its own culture and processes. That means the way problems are addressed and incidents responded to will likely be unique — for better and worse. However, it’s essential that business leaders get to know these processes. Do your technology teams have the resources needed to respond quickly? Are organizational structures helping them move as they need to or hindering them? What metrics are in place for measuring incident response times — and how do we measure up at the moment? ... In short, talk to your technology leaders about how they’re working to achieve software and delivery excellence — are we following best practices? Are we making informed decisions about tools? Are we bringing security decisions to bear on software early in the development process? Again, trust and honesty are important here. No one wants to talk about their limitations and what they’re not currently doing. 


Copyright Office Calls for Federal Law to Combat Unauthorized Deepfakes

A spate of legislation is in progress to address unauthorized deepfakes, but these laws are fragmented, focusing on specific applications. For instance, the Deepfakes Accountability Act aims to safeguard national security from deepfakes and Tennessee’s ELVIS Act safeguards vocal rights of musicians. “The impact is not limited to a select group of individuals, a particular industry, or a geographic location,” the Copyright Office said in its report, urging the need for comprehensive legislation. The office contended that current legal remedies for those harmed by unauthorized digital replicas are insufficient and that existing federal laws are “too narrowly drawn to fully address the harm from today’s sophisticated digital replicas.” Among the recommendations for federal legislation on deepfakes, the Copyright Office suggested protecting all individuals, not just celebrities, from unauthorized digital replicas. The proposed law would establish a federal right that protects all individuals during their lifetimes from the knowing distribution of unauthorized digital replicas.


From Accidental to Intentional: Your Roadmap to Architectural Excellence

One place to start is by identifying the primary purpose of IT in the organization. We’ve experienced all sorts of responses when we propose this as a starting point. From quizzical looks to downright shock is common. Yet, when organizations really take a look at their own internal beliefs, there is a wide discrepancy in the view of purpose. ... A common discussion with our clients includes a session to understand the pain points that they experience. Importantly, we work to learn who experiences the pain. We find it common for decision-makers to feel disproportionately less pain under the current architectural state. Understanding why decision-makers feel less pain is a critical part of these discussions. Your technical team likely faces challenges meeting deadlines and budgets beyond their control, often accumulating technical debt. Technical debt is often the result of working around architectural deficiencies to meet these deadlines and remain within budget. ... To build a culture of improvement, start by providing the space and resources your team needs to tackle these challenges head-on.


LLM progress is slowing — what will it mean for AI?

To see the trend, consider OpenAI’s releases. The leap from GPT-3 to GPT-3.5 was huge, propelling OpenAI into the public consciousness. The jump up to GPT-4 was also impressive, a giant step forward in power and capacity. Then came GPT-4 Turbo, which added some speed, then GPT-4 Vision, which really just unlocked GPT-4’s existing image recognition capabilities. And just a few weeks back, we saw the release of GPT-4o, which offered enhanced multi-modality but relatively little in terms of additional power. ... Because as the LLMs go, so goes the broader world of AI. Each substantial improvement in LLM power has made a big difference to what teams can build and, even more critically, get to work reliably. Think about chatbot effectiveness. With the original GPT-3, responses to user prompts could be hit-or-miss. Then we had GPT-3.5, which made it much easier to build a convincing chatbot and offered better, but still uneven, responses. It wasn’t until GPT-4 that we saw consistently on-target outputs from an LLM that actually followed directions and showed some level of reasoning. We expect to see GPT-5 soon, but OpenAI seems to be managing expectations carefully. 


Empowering Efficient DevOps with AI + Automation

Today’s DevOps practitioners must contend with technological challenges that were unimaginable when the term was first coined during the inaugural DevOpsDays conference in 2009. Since then, technology and data have scaled at a record-breaking rate, with the total amount of data created globally projected to nearly triple between 2020 and 2025. The management of this explosion of data in turn requires DevOps teams to navigate multiple clouds, networks, emerging technologies and more to conduct day-to-day operations. These disparate environments also lead to increased complexity and limited observability and keep information siloed, creating several challenges. ... Fortunately, DevOps teams are learning that a more intelligent and automated approach to IT management can help overcome the above challenges and unlock more efficiency, quality and value for the organization. By establishing a more agile and AI-enabled approach to IT operations management, DevOps practitioners can not only cope and keep pace with the modern landscape but thrive and drive innovation amid these challenges. While there is no single blueprint, organizations should focus on a holistic approach to streamlining and automating IT operations in modern hybrid cloud environments. 



Quote for the day:

"Nothing in the world is more common than unsuccessful people with talent." -- Anonymous

Daily Tech Digest - August 10, 2024

What to Look for in a Network Detection and Response (NDR) Product

NDR's practical limitation lies in its focus on the network layer, Orr says. Enterprises that have invested in NDR also need to address detection and response for multiple security layers, ranging from cloud workloads to endpoints and from servers to networks. "This integrated approach to cybersecurity is commonly referred to as Extended Detection and Response (XDR), or Managed Detection and Response (MDR) when provided by a managed service provider," he explains. Features such as Intrusion Prevention Systems (IPS), which are typically included with firewalls, are not as critical because they are already delivered via other vendors, Tadmor says. "Similarly, Endpoint Detection and Response (EDR) is being merged into the broader XDR (Extended Detection and Response) market, which includes EDR, NDR, and Identity Threat Detection and Response (ITDR), reducing the standalone importance of EDR in NDR solutions." ... Look for vendors that are focused on fast, accurate detection and response, advises Reade Taylor, an ex-IBM Internet security systems engineer, now the technology leader of managed services provider Cyber Command. 


AI In Business: Elevating CX & Energising Employees

Using AI in CX certainly eases business operations, but it’s ultimately a win for the customer too. As AI collects, analyses, and learns from large volumes of data, it delivers new worlds of actionable insights that empower businesses to get personal with their customer journeys. In past years, businesses have tried their best to personalise the customer experience – but working with a handful of generic personas only gets you so far. Today’s AI, however, has the power to unlock next-level insights that help businesses discover customers’ expectations, wants, and needs so they can create individualised experiences on a one-to-one level. ... In human resources, AI further presents opportunities to help employees. For example, AI can elevate standard on-the-job training by creating personalised learning and development programmes for employees. Meanwhile, AI can also help job hunters find opportunities they may have overlooked. For example, far too many jobseekers have valuable and transferable skills but lack the experience in the right business vertical to land a job. According to NIESR, 63% of UK graduates are mismatched in this way.


The benefits and pitfalls of platform engineering

The first step of platform engineering is to reduce tool sprawl by making clear what tools should make up the internal developer platform. The next step is to reduce context-switching between these tools, which can result in significant time loss. By using a portal as a hub, users can find all of the information they need in one place without switching tabs constantly. This improves the developer experience and enhances productivity. ... In terms of scale, platform engineering can help an organization to better understand their services, workloads, traffic and APIs and manage them. This can come through auto-scaling rules, load balancing traffic, using TTLs in self-service actions, and an API catalog. ... Often, as more platform tools are added and as more microservices are introduced, things become difficult to track - and this leads to an increase in deploy failures, longer feature development/discovery times, and general fatigue and developer dissatisfaction because of the unpredictability of bouncing around different platform tools to perform their work. There needs to be a way to track what’s happening throughout the SDLC.
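The "TTL in self-service actions" mentioned above usually means that anything a developer provisions through the portal carries an expiry, and a reaper tears down what has lapsed. A minimal sketch, with invented environment names and timestamps:

```python
# Illustrative TTL cleanup for self-service environments: each environment
# created through the portal records an expiry time; a periodic reaper
# returns the ones due for teardown.
def expired(envs: dict[str, float], now: float) -> list[str]:
    """Names of environments whose expiry timestamp has passed."""
    return [name for name, expires_at in envs.items() if expires_at <= now]

envs = {"demo-env": 100.0, "load-test": 500.0}  # name -> expiry timestamp
expired(envs, now=200.0)   # only demo-env has lapsed
```

This is one of the cheapest ways a platform keeps scale in check: resources created on a whim cannot quietly accumulate forever.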


The irreversible footprint: Biometric data and the urgent need for right to be forgotten

The absence of clear definitions and categorisations of biometric data within current legislation highlights the need for comprehensive frameworks that specifically define rules governing its collection, storage, processing and deletion. Established legislation like the Information Technology Act, which were supplemented by subsequent ‘Rules’ for various digital governance aspects, can be used as a precedent. For instance, the 2021 Information Technology (Intermediary Guidelines and Digital Media Ethics Code) Rules were introduced to establish a robust complaint mechanism for social media and OTT platforms, addressing inadequacies in the Parent Act. To close the current regulatory loopholes, a separate set of rules governing biometric data under the Digital Personal Data Protection Act, 2023 should be considered. ... The ‘right to be forgotten’ must be a basic element of it, recognising people's sovereignty over their biometric data. Such focused regulations would not just bolster the safeguarding of biometric information, but also ensure compliance and accountability among entities handling sensitive data. Ultimately, this approach aims to cultivate a more resilient and privacy-conscious ecosystem within our dynamic digital landscape.


6 IT risk assessment frameworks compared

ISACA says implementation of COBIT is flexible, enabling organizations to customize their governance strategy via the framework. “COBIT, through its insatiable focus on governance and management of enterprise IT, aligns the IT infrastructure to business goals and maintains strategic advantage,” says Lucas Botzen, CEO at Rivermate, a provider of remote workforce and payroll services. “For governance and management of corporate IT, COBIT is a must,” says ... FAIR’s quantitative cyber risk assessment is applicable across sectors, and now emphasizes supply chain risk management and securing technologies such as internet of things (IoT) and artificial intelligence (AI), Shaw University’s Lewis says. Because it uses a quantitative risk management method, FAIR helps organizations determine how risks will affect their finances, Fuel Logic’s Vancil says. “This method lets you choose where to put your security money and how to balance risk and return best.” ... Conformity with ISO/IEC 27001 means an organization has put in place a system to manage risks related to the security of data owned or handled by the organization. The standard, “gives you a structured way to handle private company data and keep it safe,” Vancil says. 
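FAIR's quantitative method, as described above, expresses risk in financial terms: annual loss exposure is loss event frequency combined with per-event loss magnitude. A minimal Monte Carlo sketch under assumed, illustrative distributions (these parameters are not FAIR-mandated values):

```python
import random

def annual_loss_exposure(freq_per_year, loss_low, loss_high, sims=10_000, seed=42):
    """Average simulated annual loss: event count x per-event magnitude."""
    rng = random.Random(seed)
    total = 0.0
    for _ in range(sims):
        # Approximate a Poisson(freq) event count with many small Bernoulli trials.
        events = sum(1 for _ in range(freq_per_year * 10) if rng.random() < 0.1)
        total += sum(rng.uniform(loss_low, loss_high) for _ in range(events))
    return total / sims

# Roughly 2 incidents a year, each costing $50k-$250k:
annual_loss_exposure(2, 50_000, 250_000)  # averages near 2 x $150k = $300k
```

Expressing exposure in dollars is what lets an organization "choose where to put your security money and how to balance risk and return," per the excerpt: a $300k expected loss justifies different spending than a $5k one.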


Why is server cooling so important in the data center industry?

AI and other HPC sectors are continuing to drive up the power density of rack-mount server systems. This increased compute means increased power draw, which leads to increased heat generation. Removing that heat from the server systems in turn requires more power for high CFM (cubic feet per minute) fans. Liquid cooling technologies, including rack-level cooling and immersion, can improve the efficiency of heat removal from server systems, requiring less powerful fans. In turn, this can reduce the overall power budget of a rack of servers. When extrapolating this out across large sections of a data center footprint, the savings can add up significantly. When you consider some of the latest Nvidia rack offerings require 40kW or more, you can start to see how the power requirements are shifting to the extreme. For reference, it’s not uncommon for a lot of electronic trading co-locations to only offer 6-12kW racks, which are sometimes operated half-empty due to the servers requiring more power draw than the rack can provide. These trends are going to force data centers to adopt any technology that can reduce the power burden on not only their own infrastructure but also the local infrastructure that supplies them.
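The extrapolation argument above is just arithmetic. The rack power figure comes from the article; the fan-power fraction and the savings fraction are assumptions for illustration, not measured values:

```python
rack_power_kw = 40    # high-density AI rack, per the article
fan_fraction  = 0.10  # assumed share of rack power spent on cooling fans
fan_savings   = 0.50  # assumed fan-power reduction from liquid cooling
racks         = 200   # assumed section of a data center footprint

saved_kw = rack_power_kw * fan_fraction * fan_savings * racks
# 40 kW x 10% x 50% = 2 kW saved per rack; 400 kW across 200 racks
```

Even under these modest assumptions, the recovered 400 kW is enough headroom for ten more 40 kW racks, which is why fan efficiency compounds so quickly at scale.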


Cutting the High Cost of Testing Microservices

Given the high costs associated with environment duplication, it is worth considering alternative strategies. One approach is dynamic environment provisioning, where environments are created on demand and torn down when no longer needed. This method can help optimize resource utilization and reduce costs by avoiding the need for permanently duplicated setups. It can keep costs down, but it still comes with the trade-off of sending some testing to staging anyway. That’s because there are shortcuts we must take to spin up these dynamic environments, like using mocks for third-party services. This may put us back at square one in terms of testing reliability, that is, how well our tests reflect what will happen in production. At this point, it’s reasonable to consider alternative methods that use technical fixes to make staging and other near-to-production environments easier to test on. ... While duplicating environments might seem like a practical solution for ensuring consistency in microservices, the infrastructure costs involved can be significant. By exploring alternative strategies such as dynamic provisioning and request isolation, organizations can better manage their resources and mitigate the financial impact of maintaining multiple environments.
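
The provision-on-demand, tear-down-on-exit lifecycle can be sketched as a context manager. Everything here (the Environment class, the mocked payments gateway) is a hypothetical stand-in for a real provisioning API, and the mock illustrates the exact trade-off the text describes:

```python
from contextlib import contextmanager

class Environment:
    """Hypothetical stand-in for a provisioned test environment."""
    def __init__(self, name):
        self.name = name
        self.services = {}
        self.running = False

    def deploy(self, service, impl):
        self.services[service] = impl

@contextmanager
def ephemeral_env(name):
    env = Environment(name)
    env.running = True                      # "provision" on demand
    # Shortcut from the text: third-party services are mocked, which is
    # precisely what can make these environments less production-faithful.
    env.deploy("payments-gateway", "mock")
    try:
        yield env
    finally:
        env.running = False                 # tear down when no longer needed

with ephemeral_env("pr-1234") as env:
    was_running = env.running               # environment exists only here
```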


The Cybersecurity Workforce Has an Immigration Problem

Creating a skilled immigration pathway for cybersecurity will require new policies. Chief among them is a mechanism to verify that applicants have relevant cybersecurity skills. One approach is to let applicants demonstrate their skills by disclosing previously unknown bugs. This strategy is a natural way to prove aptitude and has the additional benefit of requiring no formal credentials or expensive testing. However, it would also require safe harbor provisions to protect individuals from prosecution under the Computer Fraud and Abuse Act. ... The West’s adversaries may also play a counterintuitive role in a cybersecurity workforce solution. Recent work from Eugenio Benincasa at ETH Zurich highlights the strength of China’s cybersecurity workforce. How many Chinese hackers might be tempted to immigrate to the West, if invited, for better pay and greater political freedom? While politically sensitive, a policy that allows foreign-trained cybersecurity experts to immigrate to the US could enhance the West’s workforce while depriving its adversaries of offensive talent. At the same time, such immigration programs must be measured and targeted to avoid adding tension to a world in which geopolitical conflict is already rising.


Cross-Cloud: The Next Evolution in Cloud Computing?

The key difference between cross-cloud and multicloud is that cross-cloud spreads the same workload across clouds. In contrast, multicloud simply means using more than one public cloud at the same time — with one cloud hosting some workloads and other clouds hosting other workloads. ... That said, in other respects, cross-cloud and multicloud offer similar benefits — although cross-cloud allows organizations to double down on some of those benefits. For instance, a multicloud strategy can help reduce cloud costs by allowing you to pick and choose from among multiple clouds for different types of workloads, depending on which cloud offers the best pricing for different types of services. One cloud might offer more cost-effective virtual servers, for example, while another has cheaper object storage. As a result, you use one cloud to host VM-based workloads and another to store data. You can do something similar with cross-cloud, but in a more granular way. Instead of having to devote an entire workload to one cloud or another depending on which cloud offers the best overall pricing for that type of workload, you can run some parts of the workload on one cloud and others on a different cloud.


Will We Survive The Transitive Vulnerability Locusts

The issue today is that modern software development resembles constructing with Legos, where applications are built using numerous open-source dependencies — no one writes frameworks from scratch anymore. With each dependency comes the very real probability of inherited vulnerabilities. When unique applications are then built on top of those frameworks, the result is a patchwork of potentially vulnerable dependencies stitched together with our own proprietary code, without any mitigation of the existing vulnerabilities. ... With a proposed solution, it would be easy to conclude that we have fixed the problem. Given this vulnerability, we could just patch it and be secure, right? But after we updated the manifest file, and theoretically removed the transitive vulnerability, it still showed up in the SCA scan. After two tries at remediating the problem, we recognized that two versions of the dependency were present. Using the SCA scan, we determined where the root cause of the vulnerability had been imported and used. This is a fine manual fix, but reproducing this process manually at scale is near-impossible. We therefore decided to test whether we could group CVE behavior by their common weakness enumeration (CWE) classification.
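
The grouping step described at the end can be sketched as follows; the shape of the SCA scan findings here is a hypothetical simplification:

```python
from collections import defaultdict

# Hypothetical SCA scan output: each finding carries a CVE and its CWE class.
findings = [
    {"cve": "CVE-2021-0001", "cwe": "CWE-502", "package": "lib-a"},
    {"cve": "CVE-2022-0002", "cwe": "CWE-79",  "package": "lib-b"},
    {"cve": "CVE-2023-0003", "cwe": "CWE-502", "package": "lib-c"},
]

def group_by_cwe(findings):
    """Bucket CVEs by CWE so remediation can be handled per weakness class."""
    groups = defaultdict(list)
    for f in findings:
        groups[f["cwe"]].append(f["cve"])
    return dict(groups)

groups = group_by_cwe(findings)
# e.g. {"CWE-502": ["CVE-2021-0001", "CVE-2023-0003"], "CWE-79": ["CVE-2022-0002"]}
```

Two deserialization flaws (CWE-502) in different packages can then share one remediation playbook instead of being triaged CVE by CVE.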



Quote for the day:

"You are the only one who can use your ability. It is an awesome responsibility." -- Zig Ziglar

Daily Tech Digest - August 09, 2024

High-Performance IT Strategy Drives Business Value From AI

CIOs and technology leaders have always aimed to ensure IT-business alignment, but achieving it proved challenging. Forrester's research indicated that firms with strong alignment grew 2.4 times faster than their peers and were twice as profitable. "Businesses often do not state their requirements clearly, and IT leaders struggle to understand them," Sharma said. ... Business is based on trust, and companies that people trust can earn more loyalty and advocacy. Because technology is central to customer experience, trust is vital in HPIT. Forrester's data showed that people who trusted a company were 1.8 times more likely to recommend that company to friends and peers. "We have found that the companies that can create mutual trust between business and IT - and business and their customers - tend to outperform their peers in the market," Sharma said. ... Organizations need to keep pace with the rapid technological changes. The swift evolution of technology necessitated quick adaptation and scaling to meet unique and common business needs. "Alignment is ongoing. You need to change your technology skills, practices and even the technology itself," Sharma said.


How to train an AI-enabled workforce — and why you need to

Building an AI team is an evolving process, just as genAI itself is steadily evolving — even week to week. “First, it’s crucial to understand what the organization wants to do with AI,” Corey Hynes, executive chair and founder at IT training company Skillable, said in an earlier interview with Computerworld. “Second, there must be an appetite for innovation and dedication to it, and a strategy — don’t embark on AI efforts without due investment and thought. Once you understand the purpose and goal, then you look for the right team,” Hynes added. ... Corporate AI initiatives, Alba said, are similar to the shift that took place when the internet or cloud computing took hold, and there was “a sudden upskilling” in the name of productivity. Major technology market shifts also affect how employees think about their careers. “Am I getting the right development opportunities from my employer? Am I being upskilled?” Alba said. “How upfront are we about leveraging some of these innovations? Am I using a private LLM at my employer? If not, am I using some of the public tools, i.e. OpenAI and ChatGPT? How much on the cutting edge am I getting and how much are we innovating?”


Immutability in Cybersecurity: A Layer of Security Amidst Complexity and Misconceptions

An immutable server provides an environmental defense for the data it contains. It generally uses a stripped down operating system and configuration that does not allow, or severely limits, third-party access. Under such circumstances, any attempted access and any unusual activity is potentially malicious. Once configured, the server’s state is fixed – the software, configuration files, and data on the server cannot be modified directly. If this somehow does happen, the data contained can be burned, a new server with the same system configuration can be stood up, and fresh data from backup could be uploaded. It means, in theory, the immutable server could always be secure and contain the latest data. ... Immutable backup is a copy of data that cannot be altered, changed, or deleted. It is fundamentally some form of write once, read many times technology. Anthony Cusimano, director of technical marketing at Object First, provides more detail. “Immutable backup storage is a type of data repository where information cannot be modified, deleted, or overwritten for a set period. Most immutable storage targets are object storage and use an ‘object lock’ mechanism to prevent unintentional or deliberate alterations or deletions.”
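
A toy model of the write-once-read-many ("object lock") semantics described above, with no real vendor API involved: once an object is stored with a retention period, modification and deletion raise until that period lapses.

```python
import time

class ImmutableStore:
    """Toy WORM store: objects cannot be changed or deleted while locked."""
    def __init__(self):
        self._objects = {}   # key -> (data, retain_until_timestamp)

    def put(self, key, data, retention_seconds):
        if key in self._objects and self._locked(key):
            raise PermissionError(f"{key} is locked (WORM)")
        self._objects[key] = (data, time.time() + retention_seconds)

    def get(self, key):
        return self._objects[key][0]

    def delete(self, key):
        if self._locked(key):
            raise PermissionError(f"{key} is locked (WORM)")
        del self._objects[key]

    def _locked(self, key):
        return time.time() < self._objects[key][1]

store = ImmutableStore()
store.put("backup-2024-08-07", b"snapshot-bytes", retention_seconds=3600)

try:
    store.delete("backup-2024-08-07")
    delete_succeeded = True
except PermissionError:
    delete_succeeded = False   # still within the retention window
```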


CrowdStrike's Legal Pressures Mount, Could Blaze Path to Liability

Currently, the bar is so high for bringing a successful case against a software maker that most attorneys are disincentivized to even try, says Fordham's Sharma. "How these cases go will give us a lot of insight into how high are these barriers, what needs to be reformed," she says. "We don't have a lot of case law on this ... so this will be very exemplary in shedding light on exactly what the contours of those barriers are." The software liability landscape is currently pretty craggy. While simple on its surface — "software makers must be held responsible for insecure software" — even the question of who is responsible can quickly become complex, as the interplay between Delta Air Lines, CrowdStrike, and Microsoft shows. Software liability legislation and regulations would have to solve this issue and many others, the Atlantic Council's Cyber Statecraft Initiative stated in a 32-page analysis published earlier this year. "Software security is a problem of 'shared responsibility': users of software, in addition to its developers, have significant control over cybersecurity outcomes through their own security practices," the report stated.


Meet Prompt Poet: The Google-acquired tool revolutionizing LLM prompt engineering

Prompt Poet is a groundbreaking tool developed by Character.ai, a platform and makerspace for personalized conversational AIs, which was recently acquired by Google. Prompt Poet potentially offers a look at the future direction of prompt context management across Google’s AI projects, such as Gemini. ... Customizing an LLM application, such as a chatbot, often involves giving it detailed instructions about how to behave. This might mean describing a certain personality type, situation, or role, or even emulating a specific historical or fictional figure. ... Data can be loaded in manually, just by typing it into ChatGPT. If you ask for advice about how to install some software, you have to tell it about your hardware. If you ask for help crafting the perfect resume, you have to tell it your skills and work history first. However, while this is fine for personal use, it does not work for development. Even for personal use, manually inputting data for each interaction can be tedious and error-prone.


The rise of the ‘machine defendant’ – who’s to blame when an AI makes mistakes?

One of the most obvious risks is that “bad actors” – such as organised crime groups and rogue nation states – use the technology to deliberately cause harm. This could include using deepfakes and other misinformation to influence elections, or to conduct cybercrimes en masse. ... Less dramatic, but still highly problematic, are the risks that arise when we entrust important tasks and responsibilities to AI, particularly in running businesses and other essential services. It’s certainly no stretch of the imagination to envisage a future global tech outage caused by computer code written and shipped entirely by AI. When these AIs make autonomous decisions that inadvertently cause harm – whether financial loss or actual injury – whom do we hold liable? ... Market forces are already driving things rapidly forward in artificial intelligence. To where, exactly, is less certain. It may turn out that the common law we have now, developed through the courts, is adaptable enough to deal with these new problems. But it’s also possible we’ll find current laws lacking, which could add a sense of injustice to any future disasters.


Making the gen AI and data connection work

Faced with insufficient datasets and the risk of training ML systems with copyrighted data, the challenges for today's CIOs span from privacy and security, to compliance and anonymization. So what can CIOs do beyond being vigilant about regulation and collaborating with fellow managers to help instill trust in AI? ... The real challenge, however, is to “demonstrate and estimate” the value of projects not only in relation to TCO and the broad-spectrum benefits that can be obtained, but also in the face of obstacles such as lack of confidence in tech aspects of AI, and difficulties of having sufficient data volumes. But these are not insurmountable challenges. ... Gartner agrees that synthetic data can help solve the data availability problem for AI products, as well as privacy, compliance, and anonymization challenges. Synthetic data can be generated to reflect the same statistical characteristics as real data, but without revealing personally identifiable information or other sensitive details, thereby complying with privacy-by-design regulations.


What’s the Difference Between Observability and Monitoring?

Monitoring acts like a vigilant guard, constantly checking system health against predefined thresholds for signs of trouble. Its primary goal is to track the health and performance of systems based on established metrics and logs, like CPU utilization, memory usage, server response times or even application-specific data points. ... While monitoring excels at identifying deviations, observability aims to understand the system’s internal state by analyzing its external outputs. Similar to a detective looking at clues at a crime scene, observability gathers all available data (metrics, logs, traces and events) to not only identify the issue but also uncover its root cause. This holistic view allows teams to diagnose complex problems and anticipate potential breakdowns before they occur. ... Observability and monitoring are complementary rather than alternative practices. Monitoring is a vigilant guard while observability is a thoughtful analyst. Being able to react to some issues immediately while preventing others and making overall system improvements over time — the winning strategy combines both.
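
The "vigilant guard" side can be sketched as a simple threshold check; the metric names and limits below are illustrative, not a real monitoring configuration:

```python
# Predefined thresholds the guard watches (hypothetical values).
THRESHOLDS = {"cpu_pct": 85.0, "mem_pct": 90.0, "p95_latency_ms": 500.0}

def check_metrics(metrics):
    """Return the names of metrics currently breaching their thresholds."""
    return [name for name, value in metrics.items()
            if name in THRESHOLDS and value > THRESHOLDS[name]]

alerts = check_metrics({"cpu_pct": 92.5, "mem_pct": 71.0, "p95_latency_ms": 640.0})
# CPU and latency breach; observability would then correlate logs, traces
# and events to explain *why* they breached, not just that they did.
```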


Microsoft Teams offers more for developers

Much of the new developer functionality comes from an updated JavaScript library: TeamsJS 2.0. As it offers a lot of backwards compatibility, older applications can be quickly ported to the latest release, adding support for Outlook as well as Teams. Some changes will need to be made, for example, updating code to support more modern JavaScript asynchronous capabilities. At the same time, there’s been a reorganization of the library’s APIs, grouping them by capability. Microsoft has updated its Visual Studio Code Teams Toolkit to help with application migrations. This automates the process of updating dependencies and app manifests, providing notifications of where you need to update interfaces and callbacks. It’s not completely automatic, but it does help you start making necessary changes. ... Another interesting developer feature is support for Mermaid, a JavaScript-based language that allows you to quickly add charts and diagrams. Again, this can be used collaboratively, enabling architects and other development team members to dynamically document code snippets, showing how they interact and what functionality they offer. 


Quantum Cryptography Has Everyone Scrambling

At the center of these varied cryptography efforts is the distinction between QKD and post-quantum cryptography (PQC) systems. QKD is based on quantum physics, which holds that entangled qubits can store their shared information so securely that any effort to uncover it is unavoidably detectable. Sending pairs of entangled-photon qubits to both ends of a network provides the basis for physically secure cryptographic keys that can lock down data packets sent across that network. ... Typically, quantum cryptography systems are built around photon sources that chirp out entangled photon pairs—where photon A heading down one length of fiber has a polarization that’s perpendicular to the polarization of photon B heading in the other direction. The recipients of these two photons perform separate measurements that enable both recipients to know that they and only they have the shared information transmitted by these photon pairs. ... By contrast, post-quantum cryptography (PQC) is based not around quantum physics but pure math, in which next-generation cryptographic algorithms are designed to run on conventional computers. 
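
As rough intuition for how matched measurements become shared key bits, here is a purely classical toy simulation of the sifting step used in QKD protocols. No quantum behavior is modeled, and an ideal, eavesdropper-free channel is assumed:

```python
import random

random.seed(7)
N = 32
alice_bits  = [random.randint(0, 1) for _ in range(N)]
alice_bases = [random.choice("+x") for _ in range(N)]   # '+' or 'x' basis
bob_bases   = [random.choice("+x") for _ in range(N)]

# When the bases match, Bob's measurement agrees with Alice's bit (ideal
# channel, no eavesdropper); mismatched-basis rounds are discarded during
# the public "sifting" comparison, leaving the shared secret key.
sifted_key = [bit for bit, a, b in zip(alice_bits, alice_bases, bob_bases)
              if a == b]

# On average about half the rounds survive sifting.
```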



Quote for the day:

"We get our power from the people we lead, not from our stars and our bars." -- J. Stanford

Daily Tech Digest - August 08, 2024

4 Common LCNC Security Vulnerabilities and How To Mitigate Them

While LCNC platforms allow access restrictions on the data, they are applied on the client side by default. Unfortunately, a user with access to the application can bypass these restrictions and gain unauthorized access to the underlying data sources. Citizen developers might not be aware of the risk associated with default settings when configuring access rules. This can cause an external breach if the application is accessible over the internet or a report is published on the web. ... Apps and automation created on LCNC platforms are not immune to traditional web application vulnerabilities such as SQL injection. Consider a form for collecting user complaints that can be exploited by injecting SQL code, allowing an attacker from the internet to retrieve sensitive data, including usernames and salaries, from the database. This vulnerability arises when developers include user input directly in SQL queries without proper parameterization. ... Citizen developers mistakenly use LCNC applications and automation to send sensitive data through personal emails, store corporate data insecurely in public network drives, and generate and distribute anonymous access links to corporate resources. 
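
The complaint-form example can be sketched against an in-memory SQLite database; the table and column names are hypothetical. The contrast is between pasting user input into the SQL string and letting the driver bind it as a parameter:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE complaints (id INTEGER PRIMARY KEY, body TEXT)")

def save_complaint_unsafe(body):
    # DON'T: user input is interpolated straight into the SQL statement.
    conn.execute(f"INSERT INTO complaints (body) VALUES ('{body}')")

def save_complaint_safe(body):
    # DO: the driver binds the value; quotes in the input stay inert data.
    conn.execute("INSERT INTO complaints (body) VALUES (?)", (body,))

save_complaint_unsafe("late delivery")   # only works because the input is benign
hostile = "x'); DROP TABLE complaints; --"
save_complaint_safe(hostile)             # stored verbatim; no SQL runs from it

rows = [r[0] for r in conn.execute("SELECT body FROM complaints ORDER BY id")]
```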


EU’s DORA regulation explained: New risk management requirements for financial firms

The EU says that despite the financial sector’s increased reliance on IT firms, there is a lack of specific powers to address ICT risks arising from those third parties. The act puts critical ICT third-party service providers into the scope of regulators and subjects them to an oversight framework at the EU level. “DORA continues the impetus over the past decade in outsourced and third-party governance,” says Chaudhry, “with a focus on chain outsourcing and resiliency, with clarity that critical ICT third-party providers, including cloud service providers, need to be within the regulatory perimeter.” Under these rules, European Supervisory Authorities (ESAs) would have the right to access documents, carry out inspections, and subject third parties to fines if deemed necessary. ... In an early analysis of the regulation, Deloitte said that most firms in the sector would welcome the introduction of an oversight framework as it will provide more legal certainty around what is permissible, a level of assurance on the security of their assets in the cloud, and likely increase firms’ confidence and appetite for transitioning some of their activities to the cloud.


No god in the machine: the pitfalls of AI worship

The problem of theodicy has been a topic of debate among theologians for centuries. It asks: if an absolutely good God is omniscient, omnipotent and omnipresent, how can evil exist when God knows it will happen and can stop it? It radically oversimplifies the theological issue, but theodicy, too, is in some ways a kind of logical puzzle, a pattern of ideas that can be recombined in particular ways. I don’t mean to say that AI can solve our deepest epistemological or philosophical questions, but it does suggest that the line between thinking beings and pattern recognition machines is not quite as hard and bright as we may have hoped. The sense of there being a thinking thing behind AI chatbots is also driven by the now common wisdom that we don’t know exactly how AI systems work. What’s called the black box problem is often framed in mystical terms – the robots are so far ahead or so alien that they are doing something we can’t comprehend. That is true, but not quite in the way it sounds. New York University professor Leif Weatherby suggests that the models are processing so many permutations of data that it is impossible for a single person to wrap their head around it. 


Critical AWS Vulnerabilities Allow S3 Attack Bonanza

The researchers first uncovered Bucket Monopoly, an attack method that can significantly boost the success rate of attacks that exploit AWS S3 buckets — i.e., online storage containers for managing objects, such as files or images, and resources required for storing operational data. The issue is that S3 storage buckets were designed to use predictable, easy-to-guess AWS account IDs instead of a unique identifier for each bucket name using a hash or qualifier. "Sometimes the only thing that an attacker needs to know about an organization is their public account ID for AWS, which is not considered sensitive data right now, but we recommend it is something that an organization should keep as a secret," Kadkoda says. To mitigate the issue, AWS changed the default configurations. "All of the services have been fixed by AWS in that they no longer create the bucket name automatically," he explains. "AWS now adds a random identifier or sequence number if the desired bucket name already exists." Security researchers and AWS customers have long debated whether AWS account IDs should be public or private. 
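
The mitigation can be sketched in a few lines: derive bucket names from a random, unguessable suffix rather than from a predictable account ID. The naming scheme below is illustrative, not AWS's actual algorithm:

```python
import secrets

def bucket_name(service: str, region: str) -> str:
    """Name a bucket with a cryptographically random, unguessable suffix."""
    suffix = secrets.token_hex(6)            # 12 hex chars an attacker can't predict
    return f"{service}-{region}-{suffix}"

name_a = bucket_name("cdk-assets", "us-east-1")
name_b = bucket_name("cdk-assets", "us-east-1")
# Same service and region, yet the names cannot be precomputed by an attacker
# who knows only the (supposedly non-sensitive) account ID.
```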


Data Ethics: New Frontiers in Data Governance

While morals concern subjective notions of good and bad, and laws concern the limits of what is socially acceptable, Aiken and Lopez define ethics as “the difference between what you have the right to do and what is the right thing to do.” Navigating that crucial difference is rarely cut and dried even in simple, day-to-day personal interactions. Still, within the world of data, ethical questions can quickly take on multiple dimensions and present challenges unique to the field. Assessing data ethics can be decidedly confusing, for as Lopez pointed out, “Not all things that are bad for data are actually bad for the world … and vice versa.” Whereas the ethical actions and judgments that we make as private individuals tend to play out within a limited set of factors, the implications of even the most innocuous events within large-scale data management can be huge. Company data exists in “space,” potentially flowing between departments and projects, but privacy agreements and other safeguards that apply for some purposes may not apply to others. Data from spreadsheets authored for in-house analytics, for example, might violate a client privacy agreement if it migrates to open cloud storage.


How network segmentation can strengthen visibility in OT networks

First, it’s crucial to have a comprehensive understanding of the data flow within the environment — knowing what information needs to move and where. Often, technical documentation about operational design is outdated or incomplete, missing details about current data flows and usage. Second, most visibility tools in this space require specific network configurations because traditional antivirus or endpoint protection software isn’t typically viable for these devices. Therefore, it’s necessary to have mechanisms for routing traffic to inspection points. Since many OT networks are designed for resilience and uptime rather than cybersecurity, reconfiguring them to enable traffic inspection can be challenging. Network segmentation projects are time-consuming, expensive, and may lead to operational downtime, which is usually unacceptable in OT environments. Deploying visibility tools also requires identifying the legacy technologies that tend to run rampant in OT networks and won’t support the changes necessary to feed those tools. These can include unmanaged switches, network devices that don’t support RSPAN, and outdated or oversubscribed cabling infrastructure.


Is The AI Bubble About To Burst?

While it is said that AI could add around $15 trillion to the value of the global economy, recent earnings reports from the likes of Google and Tesla have been less than stellar, leading to the recent dips in share prices. At the same time, there are reports that the general public is becoming more distrustful of AI and that businesses are finding it difficult to make money from it. Does this mean that the AI revolution—touted as holding the solution to problems as diverse as curing cancer and saving the environment—is about to come crashing down around our ears? ... However, it's important to note that even these tech giants aren't immune to external pressures. The ongoing Google antitrust case, for instance, could have far-reaching implications not just for Google, but for other major players in the tech industry as well. Nvidia is already facing two separate antitrust probes from the U.S. Department of Justice, focusing on its acquisition of Run:ai and alleged anti-competitive practices in the AI chip market. These legal and regulatory challenges could potentially reshape the landscape for Big Tech's AI ambitions. It's also worth mentioning that while the established tech companies have diversified revenue streams, there are newer players like OpenAI and Anthropic that are primarily focused on AI. 


Overcoming Human Error in Payment Fraud: Can AI Help?

Scammers usually target accounts payable departments, which process payments to suppliers and vendors. They typically pose as an existing supplier and send fraudulent invoices to an organization or even digitally gain access to a company's AP processes to authorize large payments, said Infosys. ... Accounts payable automation solutions can flag minute discrepancies in invoices, such as a new address or new bank account details, that manual processes might miss. Alerts can prompt companies to follow up with their vendors to verify the legitimacy of invoices before processing payments. ... Businesses see the potential for AI to reduce fraud losses in B2B payments. Companies can use AI to examine historical data to identify patterns, detect anomalies and automate routine tasks such as data entry and calculations. They can use crowdsourced data from vendors to streamline processes and enhance trust. Technologies that provide end-to-end visibility of the entire B2B payment ecosystem offer a comprehensive view, helping detect and prevent issues arising from human errors. Some organizations have launched AI-based initiatives to fight fraud, but it's too soon to see results.
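
The discrepancy-flagging described above can be sketched as a comparison against a vendor master record; the records and field names here are hypothetical:

```python
# Hypothetical vendor master record held by the AP system.
vendor_master = {
    "ACME-001": {"bank_account": "DE89370400440532013000",
                 "address": "12 Main St"},
}

def flag_discrepancies(invoice):
    """Return the invoice fields that differ from the vendor master record."""
    known = vendor_master[invoice["vendor_id"]]
    return [field for field in ("bank_account", "address")
            if invoice[field] != known[field]]

suspicious = flag_discrepancies({
    "vendor_id": "ACME-001",
    "bank_account": "GB29NWBK60161331926819",   # changed: classic BEC signal
    "address": "12 Main St",
})
# -> ["bank_account"]: hold the payment and verify with the vendor out-of-band.
```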


Post-quantum encryption: Crypto flexibility will prepare firms for quantum threat, experts say

For enterprises, there are two big challenges that come with quantum computers. First of all, we don’t know when the day will come when a quantum computer breaks classical encryption, making it hard to plan for. It would be tempting to put off solving the problem until the quantum computers are here – and then it will be too late. Second, there is the ‘collect now, decrypt later’ threat. Major intelligence agencies may be – and almost certainly are – collecting any and all data they can get their hands on, planning ahead for a future where they can decrypt it all. “They’ve been doing it forever,” Lyubashevsky says. ... One problem, he says, is that encryption is often buried deep inside code libraries and third-party products and services. Or fourth or fifth party. “You have to get a cryptographic bill of materials to discover the cryptography inside – and that’s not easy,” he says. And that’s just the first challenge. Once all the encryption is identified, it needs to be replaced with a modern, flexible system. And that’s not always possible if parts of the system that are beyond your control have older encryption hard-coded.
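
A starter sketch for building such a cryptographic bill of materials: scan source text for known algorithm identifiers so buried usages can be inventoried. The pattern list is illustrative and nowhere near exhaustive:

```python
import re

# Illustrative identifiers only; a real CBOM tool would also inspect
# dependencies, binaries, certificates and configuration.
CRYPTO_PATTERNS = re.compile(
    r"\b(RSA|ECDSA|DSA|MD5|SHA-?1|3DES|AES|DiffieHellman)\b", re.IGNORECASE)

def inventory_crypto(source: str):
    """Return the sorted set of crypto identifiers mentioned in the source."""
    return sorted({m.group(1).upper() for m in CRYPTO_PATTERNS.finditer(source)})

snippet = """
    key = generate_keypair(2048)   # RSA hard-coded here
    digest = md5(payload)          # legacy checksum
"""
found = inventory_crypto(snippet)
```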


Study backer: Catastrophic takes on Agile overemphasize new features

"Testing is kind of one of those tools that are there, but in order for testing to actually be able to work at all you need to know what you're testing. So you need good requirements to outline the non-functional requirements that are there." Such as reliability. "The interesting thing is that a lot of people, I think, in the Agile community, a lot of the Agile fundamentalists will argue that user stories are sufficient. These essentially just describe functional behavior, but they lack a generalizable specification or nonfunctional requirements." "And so I think that's one of the key flaws. When you end up looking at the most dogmatic application of Agile, we just have user stories, but you've lacked that generalizable specification." ... For software engineering, however, things are less rosy. He points to an interpretation of DevOps where issues don't really matter as long as the system recovers from them, and velocity and quality are never in conflict. "This has led to absolutely catastrophic outcomes in the past." However, it is organizational transformation, where a methodology and mindset branded as "Agile" is applied across a business, which is where the wheels can really come off. 



Quote for the day:

"Nobody who has ever given his best has regretted it." -- George Halas

Daily Tech Digest - August 07, 2024

Should You Buy or Build an AI Solution?

Training an AI model is not cheap; ChatGPT cost $10 million to train in its current form, while the cost to develop the next generation of AI systems is expected to be closer to $1 billion. Traditional AI tends to cost less than generative AI because it runs on fewer GPUs, yet even the smallest scale of AI projects can quickly reach a $100,000 price tag. Building an AI model should only be done if it’s expected that you will recoup building costs within a reasonable time horizon. ... The right partner will help integrate new AI applications into the existing IT environment and, as mentioned, provide the talent required for maintenance. Choosing an existing model tends to be cheaper and faster than building a new one. Still, the partner or vendor must be vetted carefully. Vendors with an established history of developing AI will likely have better data governance frameworks in place. Ask them about policies and practices directly to see how transparent they are. Are they flexible enough to make said policies align with yours? Will they demonstrate proof of their compliance with your organization’s policies? The right partner will be prepared to offer data encryption, firewalls, and hosting facilities to ensure regulatory requirements are met, and to protect company data as if it were their own.
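
The recoup-within-a-reasonable-horizon test is simple arithmetic; all figures below are hypothetical inputs, not quotes from any vendor:

```python
def breakeven_months(build_cost: float, monthly_benefit: float) -> float:
    """Months until the expected benefit has paid back the build cost."""
    return build_cost / monthly_benefit

# A $100,000 traditional-AI project expected to return $8,000/month:
months = breakeven_months(100_000, 8_000)   # 12.5 months
worth_building = months <= 24               # e.g. against a two-year horizon
```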


Business Data Privacy Standards and the Impact of Artificial Intelligence on Data Governance

Artificial intelligence technologies, including machine learning and natural language processing, have revolutionized how businesses analyze and utilize data. AI systems can process vast amounts of information at unprecedented speeds, uncovering patterns and generating insights that drive strategic decisions and operational efficiencies. However, the use of AI introduces complexities to data governance. Traditional data governance practices focused on managing structured data within defined schemas. AI, on the other hand, thrives on vast swaths of information and can generate entirely new data. ... As AI continues to evolve, so too must data governance frameworks. Future advancements in AI technologies, such as federated learning and differential privacy, hold promise for enhancing data privacy while preserving the utility of AI applications. Collaborative efforts between businesses, policymakers, and technology experts are essential to navigate these complexities and ensure that AI-driven innovation benefits society responsibly. 
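
Differential privacy's core mechanism can be sketched with the standard library: add Laplace noise, scaled to the query's sensitivity divided by the privacy budget epsilon, before releasing an aggregate. The sampling uses the Laplace inverse CDF, and the parameters are illustrative:

```python
import math
import random

def laplace_noise(scale: float, rng: random.Random) -> float:
    """Sample Laplace(0, scale) via inverse-CDF transform of a uniform draw."""
    u = rng.random() - 0.5
    return -scale * math.copysign(math.log(1.0 - 2.0 * abs(u)), u)

def private_count(true_count: int, epsilon: float, rng: random.Random) -> float:
    sensitivity = 1.0        # one person changes a count by at most 1
    return true_count + laplace_noise(sensitivity / epsilon, rng)

rng = random.Random(42)
released = private_count(1_000, epsilon=0.5, rng=rng)
# Close to 1,000, but any single individual's presence remains deniable.
```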


Foundations of Forensic Data Analysis

Forensic data analysis faces a variety of technical, legal, and administrative challenges. Technical factors that affect forensic data analysis include encryption issues, the need for large amounts of disk storage space for data collection and analysis, and anti-forensics methods. Legal challenges can arise in forensic data analysis and can confuse or derail an investigation, such as attribution issues stemming from a malicious program capable of executing malicious activities without the user’s knowledge. These applications can make it difficult to identify whether cybercrimes were deliberately committed by a user or if they were executed by malware. The complexities of cyber threats and attacks can create significant difficulties in accurately attributing malicious activity. Administratively, the main challenge facing data forensics involves accepted standards and management of data forensic practices. Although many accepted standards for data forensics exist, there is a lack of standardization across and within organizations. Currently, there is no regulatory body that oversees data forensic professionals to ensure they are competent and qualified and are following accepted standards of practice.


Closing the DevSecOps Gap: A Blueprint for Success

Businesses need to start at the top and ensure all DevSecOps team members accept a continuous security focus: Security isn't a one-time event; it's an ongoing process. Leaders must encourage open communication between development, security, and operations teams, which can be achieved with regular meetings and shared communication platforms that facilitate constant collaboration. Developers must learn secure coding practices when building their models, while security and operations teams need to better understand development workflows to create practical security measures. Peer-to-peer communication and training are about partnership, not conflict, and effective DevSecOps thrives on collaboration, not finger-pointing. Only once these personnel changes are implemented can a DevSecOps team successfully execute a shift-left security approach and leverage the benefits of technology automation and efficiency. Once internal harmony is achieved, DevSecOps teams can begin consolidating automation and efficiency into their workflows by integrating security testing tools within the CI/CD pipelines.
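To make the shift-left idea concrete, here is a toy pre-merge check of the kind a CI/CD pipeline might run before code lands. The patterns and function are illustrative assumptions, not any specific commercial scanner:

```python
import re

# Naive patterns for secrets that should never reach a repository.
SECRET_PATTERNS = [
    re.compile(r"AKIA[0-9A-Z]{16}"),                 # AWS-style access key id
    re.compile(r"(?i)password\s*=\s*['\"].+['\"]"),  # hardcoded password
]

def scan_diff(diff_text: str) -> list[str]:
    """Return offending lines so CI can fail the build if any are found."""
    findings = []
    for line in diff_text.splitlines():
        if any(p.search(line) for p in SECRET_PATTERNS):
            findings.append(line.strip())
    return findings

# An empty result means the gate passes; otherwise the pipeline exits nonzero.
print(scan_diff("key = os.environ['API_KEY']"))  # []
print(scan_diff('password = "hunter2"'))         # ['password = "hunter2"']
```

In practice teams would wire an established scanner into the pipeline rather than hand-rolling one, but the control flow is the same: scan early, block the merge, report to the developer.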


How micro-credentials can impact the world of digital upskilling in a big way

Micro-credentials, when correctly implemented, can complement traditional degree programmes in a number of ways. Take for example the Advance Centre, in partnership with University College Dublin, Technological University Dublin and ATU Sligo, which offers accredited programmes and modules with the intent of addressing Ireland’s future digital skill needs. “They enable students to gain additional skills and knowledge that supplement their professional field. For example, a mechanical engineer might pursue a micro-credential in cybersecurity or data analytics to enhance their expertise and employability,” said O’Gorman. By bridging very specific skills gaps, micro-credentials can cover materials that may otherwise not be addressed in more traditional degree programmes. “This is particularly valuable in fast-evolving fields where specific up-to-date skills are in high demand.” Furthermore, it is fair to say that balancing work, education and your personal life is no easy feat, but this shouldn’t mean that you have to compromise on your career aspirations. 


Edge Data Center Supports Emerging Trends

Adopting AI technologies requires a lot of computational power, storage space and low-latency networking to train and run models. These workloads favour purpose-built hosting environments, which makes them highly compatible with data centres; as the demand for AI grows, so will the demand for data centres. However, restrictions on connecting new data centres to the grid remain a challenge and will constrain data centre build-out. This positions edge data centres as a solution to the data centre capacity problem.  ... Under this pressure, cloud computing has emerged as a cornerstone of these modernisation efforts, with companies moving their workloads and applications onto the cloud. This shift has brought challenges around managing costs and ensuring data privacy. As a result, organisations are considering cloud repatriation as a strategic option. Cloud repatriation is the migration of applications, data and workloads from the public cloud environment back to on-premises or colocation infrastructure.


How To Get Rid of Technical Debt for Good

“To get rid of it or minimize it, you should treat this problem as a regular task -- systematically. All technical debt should be precisely defined and fixed with a maximum description of the current state and expected results after the problem is solved,” says Zaporozhets. “As the next step, [plan] the activities related to technical debt -- namely, who, when, and how should deal with these problems. And, of course, regular time should be allocated for this, which means that dealing with technical debt should become a regular activity, like attending daily meetings.” ... Regularly addressing technical debt requires discipline, motivation and systematic behavior from all team members. “When the team stops being afraid of technical debt and starts treating it as a regular task, the pressure will lessen, and there will be a sense of control,” says Zaporozhets, “It's important not to put technical debt on hold. I teach my teammates that each team member must remember to take a systematic approach to technical debt and take initiative. When the whole team works together on this, they will realize that technical debt is not so scary, and controlling the backlog will become a routine task.”
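Zaporozhets's advice — define each debt item precisely, with its current state, expected result, and who/when/how — maps naturally onto an ordinary backlog item. A minimal sketch (the field names are my assumption, not a prescribed schema):

```python
from dataclasses import dataclass
from datetime import date

@dataclass
class TechDebtItem:
    title: str
    current_state: str    # precise description of the problem today
    expected_result: str  # what "fixed" looks like after the work is done
    owner: str            # who deals with it
    due: date             # when the regularly allocated time is scheduled

item = TechDebtItem(
    title="Replace deprecated auth library",
    current_state="Login flow depends on libauth 1.x, end-of-life since 2023",
    expected_result="All services on libauth 2.x; CI green",
    owner="platform-team",
    due=date(2024, 9, 30),
)
```

Tracking debt this way keeps it in the same backlog and cadence as ordinary work, which is exactly the "regular task" framing the article recommends.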


New Orleans CIO Kimberly LaGrue Discusses Cyber Resilience

Cities are engrossed in the business of delivering services to constituents. But appreciating that a cyber interruption could knock down a city makes everyone think about that differently. In our cyberattack, we had the support of the mayor, the chief administrative officer and the homeland security office. The problem was elevated to those levels, and we were grateful that they appreciated the importance of the challenges. The most integral part of a good resilience strategy for government, especially for city government, is for city leaders to pay attention to it and buy into the idea that these are real threats, and they must be addressed. ... We learned of cyberattacks across the state through Louisiana’s fusion center. They were very active, very vocal about other threats. We gained a lot of insights, a lot of information, and they were on the ground helping those agencies to recover. The state had almost 200 volunteers in its response arsenal, led by the Louisiana National Guard and the state of Louisiana’s fusion center. During our cyberattack, the group of volunteers that was helping other agencies came from those events straight to New Orleans for our event.


How cyber insurance shapes risk: Ascension and the limits of lessons learned

As research has supported, simple cost-benefit conditions among victims incentivize immediate payment to cyber criminals unless perfect mitigation with backups is possible and so long as the ransom is priced to correspond with victim budgets. Any delay incurs unnecessary costs to victims, their clients, and — cumulatively — to the insurer. The result is the rapid payment posture mentioned above. The singular character of cyber risk for these companies also sets limits on the lessons that can be learned for the average CISO working to safeguard organizations across the vast majority of America’s private enterprise. ... CISOs across the board should support firmer discussions with the federal government about increasingly strict and even punitive rules for limiting the payout of criminal fees. Limiting criminal incident payouts would remove the incentives for consistent high-tempo strikes on major infrastructure providers, which the federal government could compensate for in the near term by providing better resources for Sector Risk Management Agencies and beginning to resolve the abnormal dynamics surrounding the insurer-critical infrastructure relationship.
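The cost-benefit condition described above can be sketched as a toy expected-cost comparison. The variables and thresholds are illustrative assumptions, not the cited research's actual model:

```python
def pay_is_cheaper(ransom: float, downtime_cost_per_day: float,
                   restore_days: float, restore_cost: float) -> bool:
    """Naive victim calculus: pay if the ransom undercuts recovery from backups.

    With perfect mitigation (restore_days == 0 and restore_cost == 0),
    paying is never cheaper -- matching the "unless perfect mitigation
    with backups is possible" condition in the text.
    """
    recovery_cost = restore_cost + downtime_cost_per_day * restore_days
    return ransom < recovery_cost

# A ransom priced just under the victim's recovery cost gets paid quickly.
print(pay_is_cheaper(900_000, 100_000, 7, 300_000))  # True
print(pay_is_cheaper(900_000, 0, 0, 0))              # False
```

This is also why the article argues for externally imposed limits on payouts: as long as attackers can price the ransom below each victim's recovery cost, the victim's individually rational move keeps funding the attacks.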


Transform, don't just change: Palladium India’s Neha Zutshi

The world of work is evolving rapidly, and HR is at the forefront of this transformation. One of the biggest challenges we face is managing change effectively, as poorly planned and communicated changes often meet resistance and fail. To navigate this, organisations must build the capability to manage change quickly and efficiently. This involves fostering an agile, learning culture where adaptability is valued, and employees are encouraged to embrace new ways of working. Upskilling and reskilling are critical in this process, ensuring that our workforce remains relevant and equipped to handle emerging challenges. ... Technology and AI are pervasive, permeating every industry, and HR is no exception. Various aspects of AI, such as machine learning and digital systems, have streamlined HR processes and automated mundane tasks. However, even though there are early-adopter advantages, it is crucial to assess the need and risks related to adopting innovative HR technologies. Policy and ethical considerations must be addressed when adopting these technologies. Clear policies governing confidentiality, fairness, and accuracy are essential to ensure a smooth transition.



Quote for the day:

"Successful and unsuccessful people do not vary greatly in their abilities. They vary in their desires to reach their potential." -- John Maxwell