Showing posts with label Technology Ethics. Show all posts
Showing posts with label Technology Ethics. Show all posts

Daily Tech Digest - November 05, 2025


Quote for the day:

"Effective leaders know that resources are never the problem; it's always a matter of resourfulness." -- Tony Robbins



AI web browsers are cool, helpful, and utterly untrustworthy

AI browsers can and do interact with everything on a web page: summarizing content, reading emails, composing posts, looking at images, etc., etc. Every element on the page, whether you can see it or not, can hide an attack. A hacker can embed clipboard manipulations or other hacks that traditional browsers would never, not ever, execute automatically. ... AI browser agents can be tricked by hidden instructions embedded in websites via invisible text, images, scripts, or, believe it or not, bad grammar. Your eyes might glaze over at a long run-on sentence, but your AI web browser will read it all, including instructions for an attack hidden in plain sight within it. Such malicious commands are read and executed by the AI. This can lead to exposure of sensitive data, such as emails, authentication tokens, and login details, or triggering unwanted actions, including sending emails, posting to social media, or giving your computer a bad case of malware. ... Privacy is pretty much lost these days anyway, but with AI web browsers, we’ll have all the privacy of a goldfish in a bowl. Since AI browsers monitor our every last move, they process much more granular personal information than conventional browsers. Worrying about cookies and privacy is so 1990s. AI browsers track everything. This is then used to create highly detailed behavioral profiles. What? You didn’t know that AI browsers have built-in memory functions that retain your interactions, browser history, and content from other apps? How do you think they do what they do? Intuition? ESP?


AI can flag the risk, but only humans can close the loop

Companies embedding AI into vendor risk processes need governance structures that ensure transparency, accountability, and compliance. This includes maintaining an approved sources catalogue and requiring either the system or an analyst to validate findings and document the rationale behind them. Data minimization should be built into the design by defining what information is always in scope, such as sanctions or embargo lists, and what is contextually relevant, while excluding protected or sensitive attributes under GDPR and configuring AI to ignore them. Risk assessments should be tiered, calibrating the depth of checks to supplier criticality and geography to avoid unnecessary data collection for low-risk relationships while expanding scope for high-risk scenarios. Human accountability remains essential, with a named individual owning due diligence decisions while AI provides recommendations without replacing human judgment ... Regulators are likely to allow AI use if firms establish strong controls and demonstrate effective oversight, as required by frameworks like the EU AI Act. Responsibility remains with individuals or organizations; liability does not transfer to AI itself. While regulators may struggle to specify detailed technical rules, one clear shift is that “the data volume was too large to review” will no longer be an acceptable defense.


10 top devops practices no one is talking about

“A key, yet overlooked, devops practice is building true shared ownership, which means more than just putting teams in the same chat room,” says Chris Hendrich, associate CTO of AppMod at SADA. “It requires making production reliability and performance a primary success indicator for development, not solely an operational concern. This shared accountability is what builds the organizational competency of creating better, more resilient products.” ... “Baking an integrated code quality and code security approach into your devops workflow isn’t just good practice, it’s essential and a game-changer,” says Donald Fischer, VP at Sonar. “Tackling security alongside quality from day one isn’t merely about early bug detection; it’s about building fundamentally stronger, more trustworthy, and resilient software that is secure by design.” ... “Open source is a no-brainer for developers, but as the ecosystem grows, so do the risks of malware, unsafe AI models, license issues, outdated packages, poor performance, and missing features,” says Mitchell Johnson, CPDO of Sonatype. “Modern devops teams need visibility into what’s getting pulled in, not just to stay secure and compliant, but to make sure they’re building with high-quality components.” ... “Version-controlling database schemas and configurations across development, QA, and production is a quietly powerful devops practice,” says McMillan. 


Cloud Identity Exposure Is 'a Critical Point of Failure'

Attackers keep targeting cloud-based identities to help them bypass endpoint and network defenses, says an August report from cybersecurity firm CrowdStrike. That report counts a 136% increase in cloud intrusions over the preceding 12 months, plus a 40% year-on-year increase in cloud intrusions tied to threat actors likely working for the Chinese government. "The cloud is a priority target for both criminals and nation-state threat actors," said Adam Meyers, head of counter adversary operations at CrowdStrike ... One challenge is that enough cloud identities justify elevated permissions, putting organizations at elevated risk when their credentials are exposed. Take security operations centers and incident response teams. In general, while "the principle of least privilege and minimal manual access" is a best practice, first responders often need immediate and "necessary access," says an August report from Darktrace. "Security teams need access to logs, snapshots and configuration data to understand how an attack unfolded, but giving blanket access opens the door to insider threats, misconfigurations and lateral movement." Rather than always allowing such access, experts recommend using tools that only provide it when needed, for example, through Amazon Web Services' Security Token Service. "Leveraging temporary credentials, such as AWS STS tokens, allows for just-in-time access during an investigation" that can be automatically revoked after, which "reduces the window of opportunity for potential attackers to exploit elevated permissions," Darktrace said.


How Software Development Teams Can Securely and Ethically Deploy AI Tools

Clearly, there is a danger that teams will trust AI too much, as these tools lack a command of the often nuanced context to recognize complex vulnerabilities. They may not fully grasp an application’s authentication or authorization framework, potentially leading to the omission of critical checks. If developers reach a state of complacency in their vigilance, the potential for such risks will only increase. ... Beyond security, team leaders and members must focus more on ethical and even legal considerations: Nearly one-half of software engineers are facing legal, compliance and ethical challenges in deploying AI, according the The AI Impact Report 2025 from LeadDev. The ethical/legal scenarios can take on a highly perplexing nature: A human engineer can read, learn from and write original code from an open-source library. But if an LLM does the same thing, it can be accused of engaging in derivative practices. What’s more, the current legal picture is a murky work in progress. Given the still-evolving judicial conclusions and guidelines, those using third-party AI tools need to ensure they are properly indemnified from potential copyright infringement liability, according to Ropes & Gray, a global law firm that advises clients on intellectual property and data matters. “Risk allocation in contracts concerning or contemplating AI models should be approached very carefully,” according to the firm.


How AI is Revolutionising RegTech and Compliance

Traditional approaches are failing, overwhelmed by increasing regulatory complexity and cross-border requirements. Enter RegTech: a technological revolution transforming how institutions manage regulatory obligations. Advanced artificial intelligence systems now predict compliance breaches weeks before they occur, while blockchain platforms create tamper-proof audit trails that streamline regulatory examinations. ... Natural language processing interprets complex regulatory documents automatically, updating compliance procedures within minutes of regulatory changes. Smart contracts execute compliance actions without human intervention, ensuring consistent adherence to evolving requirements. Leading institutions are achieving remarkable results. Barclays reduced regulatory document processing time from days to minutes using AI-powered analysis. JPMorgan's blockchain settlement system maintains compliance across multiple jurisdictions simultaneously. ... Regulatory-as-a-Service models are democratising access to sophisticated compliance capabilities. Smaller institutions can now access enterprise-grade RegTech through subscription services, reducing compliance costs by up to 50% whilst improving regulatory coverage. Challenges remain significant. Data privacy concerns intensify as compliance systems process vast quantities of sensitive information. Regulatory fragmentation across jurisdictions complicates platform development. 


CEOs Go All-In on AI, But Talent Isn't Ready

Despite the enthusiasm for AI, workforce readiness is still a critical concern. Approximately 74% of Indian CEOs see AI talent readiness as a determinant of their company's future success, yet 34% admit to a widening skills gap. This talent gap is multifaceted; it's not only technical proficiency that's in short supply, but also expertise in blending data science with ethics, regulatory understanding and business acumen. About 26% struggle to find candidates who balance technical skill with collaboration capabilities. ... Regulatory uncertainty still weighs heavily on CEOs' minds, with nearly half of Indian CEOs awaiting clearer regulatory guidance before pushing bold innovation initiatives, compared to only 39% globally. This cautious stance underlines a pragmatic approach to integrating AI amid evolving governance landscapes. About 76% of Indian CEOs worry that slow AI regulation progress could hinder organizational success. Ethical concerns also loom large: 62% of Indian CEOs cite them as significant barriers, slightly higher than the 59% global average, underscoring the importance of embedding trust and governance frameworks alongside technological investments. "This is why culture and leadership are very important. The board of directors must have a degree of AI literacy. There must be psychological safety in the organization. Employees must feel safe and if there's clear governance, it means there is a proactive suggestion to use sanctioned AI that meets security requirements," John Barker


Powering financial services innovation: The critical role of colocation

As AI continues to evolve, its impact on financial services is becoming both broader and deeper – moving beyond high-level innovation into the operational core of the enterprise. Today’s financial institutions face a dual mandate: to accelerate AI adoption in pursuit of competitive advantage, and to do so within the constraints of an increasingly complex digital and regulatory environment. From risk modelling and fraud prevention to real-time analytics and customer personalization, AI is being embedded into mission-critical functions. Realising its full potential, however, isn't solely a matter of algorithms – it hinges on having a data-first strategy, with the right infrastructure and governance in place. ... With exponential data growth presenting challenges, customers gain access to a secure, compliant, resilient, and performant foundation. This foundation enables the implementation of new technologies and seamless orchestration of data flows. Our goal is to simplify data management complexity and serve as the single, trusted, global data center partner for our customers. As organizations optimize their AI strategies, many are exploring cloud repatriation – the process of moving certain workloads from the cloud back to on-premises or colocation environments. This strategic move can be crucial for AI success, as it allows for better control over sensitive data, reduced latency, and improved performance for demanding AI workloads.


Measuring, Reporting, and Improving: Making Resilience Tangible and Accountable

A continuity plan sitting on a shelf provides little assurance of resilience. What matters is whether organizations can demonstrate their strategies work, they are tested, and corrective actions are tracked. Measurement transforms resilience from an abstract concept into quantifiable performance. ... Metrics ensure resilience is not left to chance or anecdote. They provide boards and regulators with evidence of progress, reinforcing accountability at the executive and governance levels. A resilience strategy that cannot be measured cannot be trusted. ... The first step in strengthening measurement is to define resilience key performance indicators (KPIs) and key risk indicators (KRIs). These metrics should evaluate outcomes rather than simply tracking activities, ensuring performance reflects actual readiness. ... Measurement alone is not enough without transparency. Organizations must establish reporting practices that make resilience performance visible to boards, regulators, and, when appropriate, customers. Sharing outcomes openly not only demonstrates accountability but also builds trust and credibility. ... One challenge organizations often encounter when measuring resilience is metric overload. In the effort to capture every detail, leaders may track too many indicators, creating complexity that dilutes focus and makes it difficult to interpret results. 


Bridging the Gap: Why DevOps Teams Are Quietly Becoming the Front Line of Security

For experienced DevOps practitioners, the idea of shifting security left isn't new. Static analysis in CI/CD pipelines, dependency scanning, and Infrastructure as Code (IaC) validation have become the norm. What's changed more recently is the pressure to respond to security events operationally, in addition to preventing them during builds. DevOps teams are adjusting in very real ways. Many are building security context into their logging practices, ensuring that logs are structured for debugging, and also for investigation and audit. Others are automating triage for security alerts using the same mindset they've applied to performance monitoring and deployment pipelines. Perhaps most importantly, DevOps teams are often the first to respond when something unusual shows up in system logs or access patterns. ... Security can be a shared responsibility across teams as long as boundaries and expectations are set. DevOps teams are defining their role in security more clearly by, for example, determining what gets logged, what counts as an anomaly, and who owns the investigation. They're also setting expectations around incident escalation, CVE response timeframes, and compliance requirements. When these lines are clear, security becomes an integrated part of the workflow instead of an extra burden. ... For many DevOps teams, security is part of the daily reality. It comes as a series of small, increasingly frequent interruptions.

Daily Tech Digest - July 10, 2025


Quote for the day:

"Strive not to be a success, but rather to be of value." -- Albert Einstein


Domain-specific AI beats general models in business applications

Like many AI teams in the mid-2010s, Visma’s group initially relied on traditional deep learning methods such as recurrent neural networks (RNNs), similar to the systems that powered Google Translate back in 2015. But around 2020, the Visma team made a change. “We scrapped all of our development plans and have been transformer-only since then,” says Claus Dahl, Director ML Assets at Visma. “We realized transformers were the future of language and document processing, and decided to rebuild our stack from the ground up.” ... The team’s flagship product is a robust document extraction engine that processes documents in the countries where Visma companies are active. It supports a variety of languages. The AI could be used for documents such as invoices and receipts. The engine identifies key fields, such as dates, totals, and customer references, and feeds them directly into accounting workflows. ... “High-quality data is more valuable than high volumes. We’ve invested in a dedicated team that curates these datasets to ensure accuracy, which means our models can be fine-tuned very efficiently,” Dahl explains. This strategy mirrors the scaling laws used by large language models but tailors them for targeted enterprise applications. It allows the team to iterate quickly and deliver high performance in niche use cases without excessive compute costs.


The case for physical isolation in data centre security

Hardware-enforced physical isolation is fast becoming a cornerstone of modern cybersecurity strategy. These physical-layer security solutions allow your critical infrastructure – servers, storage and network segments – to be instantly disconnected on demand, using secure, out-of-band commands. This creates a last line of defence that holds even when everything else fails. After all, if malware can’t reach your system, it can’t compromise it. If a breach does occur, physical segmentation contains it in milliseconds, stopping lateral movement and keeping operations running without disruption. In stark contrast to software-only isolation, which relies on the very systems it seeks to protect, hardware isolation remains immune to tampering. ... When ransomware strikes, every second counts. In a colocation facility, traditional defences might flag the breach, but not before it worms its way across tenants. By the time alerts go out, the damage is done. With hardware isolation, there’s no waiting: the compromised tenant can be physically disconnected in milliseconds, before the threat spreads, before systems lock up, before wallets and reputations take a hit. What makes this model so effective is its simplicity. In an industry where complexity is the norm, physical isolation offers a simple, fundamental truth: you’re either connected or you’re not. No grey areas. No software dependency. Just total certainty.


Scaling without outside funding: Intuitive's unique approach to technology consulting

We think for any complex problem, a good 60–70% of it can be solved through innovation. That's always our first principle. Then where we see any inefficiencies; be it in workflows or process, automation works for the other 20% of the friction. The remaining 10–20% is where the engineering plays its important role, and it allows to touch on the scale, security and governance aspects. In data specifically, we are referencing the last 5–6 years of massive investments. We partner with platforms like Databricks and DataMiner and we've invested in companies like TESL and Strike AI for securing their AI models. ... In the cloud space, we see a shift from migration to modernisation (and platform engineering). Enterprises are focussing on modernisation of both applications and databases because those are critical levers of agility, security, and business value. In AI it is about data readiness; the majority of enterprise data is very fragmented or very poor quality which makes any AI effort difficult. Next is understanding existing processes—the way work is done at scale—which is critical for enabling GenAI. But the true ROI is Agentic AI—autonomous systems which don’t just tell you what to do, but just do it. We’ve been investing heavily in this space since 2018. 


The Future of Professional Ethics in Computing

Recent work on ethics in computing has focused on artificial intelligence (AI) with its success in solving problems, processing large amounts of data, and with the award of Nobel Prizes to AI researchers. Large language models and chatbots such as ChatGPT suggest that AI will continue to develop rapidly, acquire new capabilities, and affect many aspects of human existence. Many of the issues raised in the ethics of AI overlap previous discussions. The discussion of ethical questions surrounding AI is reaching a much broader audience, has more societal impact, and is rapidly transitioning to action through guidelines and the development of organizational structure, regulation, and legislation. ... Ethics of digital technologies in modern societies raises questions that traditional ethical theories find difficult to answer. Current socio-technical arrangements are complex ecosystems with a multitude of human and non-human stakeholders, influences, and relationships. The questions of ethics in ecosystems include: Who are members? On what grounds are decisions made and how are they implemented and enforced? Which normative foundations are acceptable? These questions are not easily answered. Computing professionals have important contributions to make to these discussions and should use their privileges and insights to help societies navigate them.


AI Agents Vs RPA: What Every Business Leader Needs To Know

Technically speaking, RPA isn’t intelligent in the same way that we might consider an AI system like ChatGPT to mimic some functions of human intelligence. It simply follows the same rules over and over again in order to spare us the effort of doing it. RPA works best with structured data because, unlike AI, it doesn't have the ability to analyze and understand unstructured data, like pictures, videos, or human language. ... AI agents, on the other hand, use language models and other AI technologies like computer vision to understand and interpret the world around them. As well as simply analyzing and answering questions about data, they are capable of taking action by planning how to achieve the results they want and interacting with third-party services to get it done. ... Using RPA, it would be possible to extract details about who sent the mail, the subject line, and the time and date it was sent. This can be used to build email databases and broadly categorize emails according to keywords. An agent, on the other hand, could analyze the sentiment of the email using language processing, prioritize it according to urgency, and even draft and send a tailored response. Over time, it learns how to improve its actions in order to achieve better resolutions.


How To Keep AI From Making Your Employees Stupid

Treat AI-generated content like a highly caffeinated first draft – full of energy, but possibly a little messy and prone to making things up. Your job isn’t to just hit “generate” and walk away unless you enjoy explaining AI hallucinations or factual inaccuracies to your boss (or worse, your audience). Always, always edit aggressively, proofread and, most critically, fact-check every single output. This process isn’t just about catching AI’s mistakes; it actively engages your critical thinking skills, forcing you to verify information and refine expression. Think of it as intellectual calisthenics. ... Don’t settle for the first answer AI gives you. Engage in a dialogue. Refine your prompts, ask follow-up questions, request different perspectives and challenge its assumptions. This iterative process of refinement forces you to think more clearly about your own needs, to be precise in your instructions, and to critically evaluate the nuances of the AI’s response. ... The MIT study serves as a crucial wake-up call: over-reliance on AI can indeed make us “stupid” by atrophying our critical thinking skills. However, the solution isn’t to shun AI, but to engage with it intelligently and responsibly. By aggressively editing, proofreading and fact-checking AI outputs, by iteratively refining prompts and by strategically choosing the right AI tool for each task, we can ensure AI serves as a powerful enhancer, not a detrimental crutch.


What EU’s PQC roadmap means on the ground

The EU’s PQC roadmap is broadly aligned with that from NIST; both advise a phased migration to PQC with hybrid-PQC ciphers and hybrid digital certificates. These hybrid solutions provide the security promises of brand new PQC algorithms, whilst allowing legacy devices that do not support them, to continue using what’s now being called ‘classical cryptography’. In the first instance, both the EU and NIST are recommending that non-PQC encryption is removed by 2030 for critical systems, with all others following suit by 2035. While both acknowledge the ‘harvest now, decrypt later’ threat, neither emphasise the importance of understanding the cover time of data; nor reference the very recent advancements in quantum computing. With many now predicting the arrival of cryptographically relevant quantum computers (CRQC) by 2030, if organizations or governments have information with a cover time of five years or more, it is already too late for many to move to PQC in time. Perhaps the most significant difference that EU organizations will face compared to their American counterparts, is that the European roadmap is more than just advice; in time it will be enforced through various directives and regulations. PQC is not explicitly stated in EU regulations, although that is not surprising.


The trillion-dollar question: Who pays when the industry’s AI bill comes due?

“The CIO is going to be very, very busy for the next three, four years, and that’s going to be the biggest impact,” he says. “All of a sudden, businesspeople are starting to figure out that they can save a ton of money with AI, or they can enable their best performers to do the actual job.” Davidov doesn’t see workforce cuts matching AI productivity increases, even though some job cuts may be coming. ... “The costs of building out AI infrastructure will ultimately fall to enterprise users, and for CIOs, it’s only a question of when,” he says. “While hyperscalers and AI vendors are currently shouldering much of the expense to drive adoption, we expect to see pricing models evolve.” Bhathena advises CIOs to look beyond headline pricing because hidden costs, particularly around integrating AI with existing legacy systems, can quickly escalate. Organizations using AI will also need to invest in upskilling employees and be ready to navigate increasingly complex vendor ecosystems. “Now is the time for organizations to audit their vendor agreements, ensure contract flexibility, and prepare for potential cost increases as the full financial impact of AI adoption becomes clearer,” he says. ... Baker advises CIOs to be careful about their purchases of AI products and services and tie new deployments to business needs.


Multi-Cloud Adoption Rises to Boost Control, Cut Cost

Instead of building everything on one platform, IT leaders are spreading out their workloads, said Joe Warnimont, senior analyst at HostingAdvice. "It's no longer about chasing the latest innovation from a single provider. It's about building a resilient architecture that gives you control and flexibility for each workload." Cost is another major factor. Even though hyperscalers promote their pay-as-you-go pricing, many enterprises find it difficult to predict and manage costs at scale. This is true for companies running hundreds or thousands of workloads across different regions and teams. "You'd think that pay-as-you-go would fit any business model, but that's far from the case. Cost predictability is huge, especially for businesses managing complex budgets," Warnimont said. To gain more control over pricing and features, companies are turning to alternative cloud providers, such as DigitalOcean, Vultr and Backblaze. These platforms may not have the same global footprint as AWS or Azure but they offer specialized services, better pricing and flexibility for certain use cases. An organization needing specific development environments may go to DigitalOcean. Another may chose Vultr for edge computing. Sometimes the big players just don't offer what a specific workload requires. 


How CISOs are training the next generation of cyber leaders

While Abousselham champions a personalized, hands-on approach to developing talent, other CISOs are building more formal pathways to support emerging leaders at scale. For others like PayPal CISO Shaun Khalfan, structured development was always part of his career. He participated in formal leadership training programs offered by the Department of Defense and those run by the American Council for Technology. ... Structured development is also happening inside companies like the insurance brokerage firm Brown & Brown. CISO Barry Hensley supports an internal cohort program designed to identify and grow emerging leaders early in their careers. “We look at our – I’m going to call it newer or younger – employees,” he explains. “And if you become recognized in your first, second, or third year as having the potential to [become a leader], you get put in a program,” he explains. ... Khalfan believes good CISOs should be able to dive deep with engineers while also leading boardroom conversations. “It’s been a long time since I’ve written code,” he says, “but I at least understand how to have a deep conversation and also be able to have a board discussion with someone.” Abousselham agrees that technical experience is only one part of the puzzle. 

Daily Tech Digest - November 27, 2024

Cybersecurity’s oversimplification problem: Seeing AI as a replacement for human agency

One clear solution to the problem of technology oversimplification is to tailor AI training and educational initiatives towards diverse endpoints. Research clearly demonstrates that know-how of the underlying functions of security professions has a real mediating effect on the excesses of encountering disruptive, unfamiliar conditions. The mediation of this effect by the oversimplification mentality, unfortunately, suggests that more is required. Specifically, discussion of the foundational functionality of AI systems needs to be married to as many diverse outcomes as possible to emphasize the dynamism of the technology. ... Naturally, one of the value propositions of studies like the one presented here is the ability for professionals to see the world as another kind of professional might. Whilst tabletop exercises are already a core tool of the cybersecurity profession, there are opportunities to incorporate comparative applications’ learning for AI using simple simulations. ... Finally, wherever possible, role rotation is of clear advantage to overcoming the issues illustrated herein. In testing, the diversity of career roles over and above career length played a similar role in mitigating the excesses of the impact of novel conditions on response priorities.


How to Create an Accurate IT Project Timeline

Building resilient project plans that can handle unforeseen, yet often inevitable changes, is key to ensuring timeline accuracy. "Understanding dependencies, identifying bottlenecks, and planning delivery around these constraints have shown to be important for timeline accuracy," Chandrasekar says. Project accuracy also depends on clear communication and tracking. "It's critical to consistently review timelines with your project team and stakeholders, making updates as new information is discovered," Naqib says. He adds that project timelines should be tracked with the support of a work management tool, such as SmartSheet or Jira, in order to measure progress and identify gaps. Yet even with perfect planning, unanticipated delays or changes may occur. Proper planning and communication are key to assuring timeline accuracy, says Anne Gee, director of delivery excellence for IT managed services at data and technology consulting firm Resultant. ... The best way to get a lagging timeline back on schedule is to work with your project team to identify the root cause, Naqib advises. "Then, you can work with your team and your greater organization to explore possible resolution accelerators that will keep your timeline on track."


Shaping the Future of AI Benchmarking – Trends & Challenges

AI benchmarking serves as a foundational tool for evaluating and advancing artificial intelligence systems. Its primary objectives address critical aspects of AI development, ensuring that models are efficient, effective, and aligned with real-world needs. ... Benchmarks provide valuable insights into a model’s limitations, serving as a roadmap for enhancement. For instance: Identifying Bottlenecks: If a model struggles with inference speed or accuracy on specific data types, benchmarks highlight these areas for targeted optimization. Algorithm Development: Benchmarks inspire innovation by exposing gaps in performance, encouraging the development of new algorithms or architectural designs. Data Quality Assessment: Poor performance on benchmarks may indicate issues with training data, prompting better preprocessing, augmentation, or dataset refinement techniques. ... AI benchmarking involves a systematic process to evaluate the performance of AI models using rigorous methodologies. These methodologies ensure that assessments are fair, consistent, and meaningful, enabling stakeholders to make informed decisions about model performance and applicability.


Why data is the hottest commodity in cybersecurity

“The value of data has skyrocketed in recent years, transforming it into one of the most sought-after commodities in the digital age. The rise of AI and machine learning has only amplified the threat to data, as attackers can now automate their efforts and create more sophisticated and targeted campaigns.” Saceanu noted that Irish organisations, like those globally, are struggling to secure their systems and private information, with industries that typically hold sensitive data, such as those in healthcare, finance and education, being particularly vulnerable. “We have seen a massive focus on targeting organisations that operate in critical infrastructure for various motivations – financially oriented or to disrupt operations. This means that there are more and more ransomware attacks on manufacturing, energy and healthcare that are not only encrypting data, but also exfiltrating this data to ask for enormous ransom payments because they know that these organisations cannot afford any disruption.” For Saceanu, this shift to an environment driven by data and under near constant threat has led organisations to experiment with advanced technologies such as AI in order to improve efficiency and spearhead innovation


Proper ID Verification Requires Ethical Technology

When it comes to identity security, security teams should regularly monitor, identify, analyze, and report risks in their environment. If exploited, these risks can be detrimental to an organization, its assets, and stakeholders. They can also undercut ethical standards of privacy and data protection. Running risk assessments is especially important when there is a lack of visibility in company processes and security gaps. Organizations can systematically assess their security measures surrounding user identity data and ensure compliance with privacy policies and regulatory standards. ... Transparency is among the most vital aspects of ethical identity verification. It requires organizations to be upfront about how they practice data collection and management, and how the data is used. This has to be reflected in the company policies, culture, and of course, its technology, including data storage and access. Users, i.e., customers from whom data is collected, should be able to access the policy terms easily at any point. ... When companies are looking to procure ethical technology, it’s important to account for factors like privacy, accessibility, security, and regulations. The above factors look at the perspective of the company using the tech and how they should operate it. 


Accelerating Business Growth Using AIOps and DevOps

The rapid evolution of AI brings forth several new potential opportunities and challenges. Today, AI drives the business growth of an enterprise in more ways than one. Artificial intelligence for IT Operations or AIOps is a new concept that encompasses big data, data mining, machine learning (ML) and AI. AIOps is a practice that blends AI with IT operations to improve operational processes. AIOps platforms automate, optimize and improve IT operations and provide users with real-time visibility and predictive alerts to minimize operational issues and proactively resolve issues that may have arisen to ensure ideal IT operations. ... Adopting AIOps helps DevOps through automation, predictive intelligence and better data-driven decisions. This collaboration fosters efficient processes, improved quality and continuous improvement to meet the ever-changing demands of the industry and customer requirements. ... AI makes it easier for DevOps teams to find patterns in data, make meaning from such data and form informed decisions on which resources and processes to allocate. The convergence of AIOps and DevOps processes can yield valuable insights that can help improve decision-making.


When is data too clean to be useful for enterprise AI?

Not cleaning your data enough causes obvious problems, but context is key. Google suggests pizza recipes with glue because that’s how food photographers make images of melted mozzarella look enticing, and that should probably be sanitized out of a generic LLM. But that’s exactly the kind of data you want to include when training an AI to give photography tips. Conversely, some of the other inappropriate advice found in Google searches might have been avoided if the origin of content from obviously satirical sites had been retained in the training set. “Data quality is extremely important, but it leads to very sequential thinking that can lead you astray,” Carlsson says. “It can end up, at best, wasting a lot of time and effort. At worst, it can go in and remove signal from your data, and actually be at cross purposes with what you need.” ... AI needs data cleaning that’s more agile, collaborative, iterative and customized for how data is being used, adds Carlsson. “The great thing is we’re using data in lots of different ways we didn’t before,” he says. “But the challenge is now you need to think about cleanliness in every one of those different ways in which you use the data.” Sometimes that’ll mean doing more work on cleaning, and sometimes it’ll mean doing less.


Architectural Intelligence – The Next AI

The vast majority of software has deterministic outcomes. If this, then that. This allows us to write unit tests and have functional requirements. If the software does something unexpected, we file a bug and rewrite the software until it does what we expect. However, we should consider AI to be non-deterministic. That doesn’t mean random, but there is an amount of unpredictability built in, and that’s by design. The feature, not a bug, is that the LLM will predict the most likely next word. "Most likely" does not mean "always guaranteed". For those of us who are used to dealing with software being predictable, this can seem like a significant drawback. However, there are two things to consider. First, GenAI, while not 100% accurate, is usually good enough. ... When considering AI components in your system design, consider where you are okay with "good enough" answers. I realize we’ve spent decades building software that does what it’s expected to do, so this may be a complex idea to think about. As a thought exercise, replace a proposed AI component with a human. How would you design your system to handle incorrect human input? Anything from UI validation to requiring a second person’s review. What if the User in User Interface is an AI? 


The Impact of Advanced Data Lineage on Governance

Advanced data lineage (ADL) provides a powerful set of tools for understanding data’s history. It is proactive and preventative, addressing data issues at that moment or before they happen. Advanced data lineage represents a significant evolution where historically, traditional data lineage tracks data movement and transformations linearly. Consequently, organizations often receive static reports that quickly become outdated in fast-changing data environments. ... As ADL transforms how organizations understand and manage their data, it requires a corresponding evolution in data governance practices. This transformation requires more than selecting the right software; it applies an adaptive framework that supports efficient assessments and actions on lineage information. An adaptive Data Governance framework is flexible enough to respond quickly to new insights provided by ADL, while still maintaining a structured approach to data management. With this shift comes increased and frequent interactions between adaptive DG teams and other departments to resolve issues. To do this well, a framework should clearly define roles, responsibilities, and escalation paths when addressing issues identified by ADL. This approach is agile while maintaining a solid methodological foundation.


Navigating AI Regulations: Key Insights and Impacts for Businesses

The historical risks associated with AI highlight the need for careful consideration and proactive management as these technologies continue to evolve. Addressing these challenges requires collaboration among technologists, policymakers, ethicists, and society at large to ensure that the development and deployment of AI provides positive contributions to society while also minimizing potential harms. AI systems raise significant data privacy concerns because they collect and process vast amounts of personal data. Regulatory frameworks establish guidelines for data protection. These ensure an individuals’ information is handled secretly, responsibly, and with their full consent. AI systems must be understandable, fair, incorporate human judgment, and be ethical. Trustworthy AI systems should perform reliably across various conditions and be resilient to errors or attacks. Developers must comply with privacy laws and safeguard personal data used in training AI models. This includes obtaining user consent for data usage and implementing strong security measures to protect sensitive information.
 


Quote for the day:

"Small daily imporevement over time lead to stunning results." -- Robin Sherman

Daily Tech Digest - July 11, 2024

Will AI Ever Pay Off? Those Footing the Bill Are Worrying Already

Though there is some nervousness around how long soaring demand can last, no one doubts the business models for those at the foundations of the AI stack. Companies need the chips and manufacturing they, and they alone, offer. Other winners are the cloud companies that provide data centers. But further up the ecosystem, the questions become more interesting. That’s where the likes of OpenAI, Anthropic and many other burgeoning AI startups are engaged in the much harder job of finding business or consumer uses for this new technology, which has gained a reputation for being unreliable and erratic. Even if these flaws can be ironed out (more on that in a moment), there is growing worry about a perennial mismatch between the cost of creating and running AI and what people are prepared to pay to use it. ... Another big red flag, economist Daron Acemoglu warns, lies in the shared thesis that by crunching more data and engaging more computing power, generative AI tools will become more intelligent and more accurate, fulfilling their potential as predicted. His comments were shared in a recent Goldman Sachs report titled “Gen AI: Too Much Spend, Too Little Benefit?”


How top IT leaders create first-mover advantage

“Some of the less talked about aspects of a high-performing team are the human traits: trust, respect, genuine enjoyment of each other,” Sample says. “I’m looking at experience and skills, but I’m also thinking about how the person will function collaboratively with the team. Do I believe they’ll have the best interest of the team at heart? Can the team trust their competency? Sample also says he focuses on “will over skill.” “Qualities like curiosity and craftsmanship are sustainable, flexible skills that can evolve with whatever the new ‘toy’ in technology is,” he says. “If you’re approaching work with that bounty of curiosity and that willing mindset, the skills can adapt.” ... Steadiness and calm from the leader create the kind of culture where people are encouraged to take risks and work together to solve big problems and execute on bold agendas. That, ultimately, is what enables a technology organization to capitalize on innovative technologies. In fact, reflecting on his legacy as a CIO, Sample believes it’s not really about the technology; it’s about the people. His success, he says, has been in building the teams that operate the technology.


Can AI be Meaningfully Regulated, or is Regulation a Deceitful Fudge?

The patchwork approach is used by federal agencies in the US. Different agencies have responsibility for different verticals and can therefore introduce regulations more relevant to specific organizations. For example, the FCC regulates interstate and international communications, the SEC regulates capital markets and protects investors, and the FTC protects consumers and promotes competition. ... The danger is that the EU’s recent monolithic AI Act will go the same way as GDPR. Kolochenko prefers the US model. He believes the smaller, more agile method of targeted regulations used by US federal agencies can provide better outcomes than the unwieldy and largely static monolithic approach adopted by the EU. ... To regulate or not to regulate is a rhetorical question – of course AI must be regulated to minimize current and future harms. The real questions are whether it will be successful (no, it will not), partially successful (perhaps, but only so far as the curate’s egg is good), and will it introduce new problems for AI-using businesses (from empirical and historical evidence, yes).


The Team Sport of Cloud Security: Breaking Down the Rules of the Game

Cloud security today is too complicated to fall on the shoulders of one person or party. For this reason, most cloud services operate on a shared responsibility model that divvies security roles between the CSP and the customer. Large players in this space, such as AWS and Microsoft Azure, have even published frameworks to the lines of liability in the sand. While the exact delineations can change depending on the service model ... However, while the expectations laid out in shared responsibility models are designed to reduce confusion, customers often struggle to conceptualize what this framework looks like in practice. And unfortunately, when there’s a lack of clarity, there’s a window of opportunity for threat actors. ... The best-case scenario for mitigating cloud security risks is when CSPs and customers are transparent and aligned on their responsibilities right from the beginning. Even the most secure cloud services aren’t foolproof, so customers need to be aware of what security elements they’re “owning” versus what falls in the court of their CSP. 


AI's new frontier: bringing intelligence to the data source

There has been a shift with organisations exploring how to bring AI to their data rather than uploading proprietary data to AI providers. This shift reflects a growing concern for data privacy and the desire to maintain control over proprietary information. Business leaders believe they can better manage security and privacy while still benefiting from AI advancements by keeping data in-house. Bringing AI solutions directly to an organisation’s data eliminates the need to move vast amounts of data, reducing security risks and maintaining data integrity. Crucially, organisations can maintain strict control over their data by implementing AI solutions within their own infrastructure to ensure that sensitive information remains protected and complies with privacy regulations. Additionally, keeping data in-house minimises the risks associated with data breaches and unauthorised access from third parties, providing peace of mind for both the organisation and its clients. Advanced AI-driven data management tools deliver this solution to businesses, automating data cleaning, validation, and transformation processes to ensure high-quality data for AI training.


How AI helps decode cybercriminal strategies

The biggest use case for AI is its ability to process, analyze, and interpret natural language communication efficiently. AI algorithms can quickly identify patterns, correlations, and anomalies within massive datasets, providing cybersecurity professionals with actionable insights. This capability not only enhances the speed and accuracy of threat detection but also enables a more proactive and comprehensive approach to securing organizations against dark web-originated threats. This is vital in an environment where the difference between detecting a threat early in the cyber kill chain vs once the attacker has achieved their objective can be hundreds of thousands of dollars. ... Another potential use case of AI is in quickly identifying and alerting specific threats relating to an organization, helping with the prioritization of intelligence. One thing an AI could look for in data is intention – to assess whether an actor is planning an attack, is asking for advice, is looking to buy or to sell access or tooling. Each of these indicates a different level of risk for the organization, which can inform security operations.


Widely Used RADIUS Authentication Flaw Enables MITM Attacks

The attack scenario - researchers say a "a well-resourced attacker" could make it practical - fools the Remote Authentication Dial-In User Service into granting access to a malicious user without the attacker having to know or guess a login password. Despite its 1990s heritage and reliance on the MD5 hashing algorithm, many large enterprises still use the RADIUS protocol for authentication to the VPN or Wi-Fi network. It's also "universally supported as an access control method for routers, switches and other network infrastructure," researchers said in a paper published Tuesday. The protocol is used to safeguard industrial control systems and 5G cellular networks. ... For the attack to succeed, the hacker must calculate a MD5 collision within the client session timeout, where the common defaults are either 30 seconds or 60 seconds. The 60-second default is typically for users that have enabled multifactor authentication. That's too fast for the researchers, who were able to reduce the compute time down to minutes from hours, but not down to seconds. An attacker working with better hardware or cloud computing resources might do better, they said.


Can RAG solve generative AI’s problems?

Currently, RAG offers probably the most effective way to enrich LLMs with novel and domain-specific data. This challenge is particularly important for such systems as chatbots, since the information they generate must be up to date. However, RAG cannot reason iteratively, which means it is still dependent on the underlying dataset (knowledge base, in RAG’s case). Even though this dataset is dynamically updated, if the information there isn’t coherent or is poorly categorized and labeled, the RAG model won’t be able to understand that the retrieval data is irrelevant, incomplete, or erroneous. It would also be naive to expect RAG to solve the AI hallucination problem. Generative AI algorithms are statistical black boxes, meaning that developers do not always know why the model hallucinates and whether it is caused by insufficient or conflicting data. Moreover, dynamic data retrieval from external sources does not guarantee there are no inherent biases or disinformation in this data. ... Therefore, RAG is in no way a definitive solution. In the case of sensitive industries, such as healthcare, law enforcement, or finance, fine-tuning LLMs with thoroughly cleaned, domain-specific datasets might be a more reliable option.


Navigating the New Data Norms with Ethical Guardrails for Ethical AI

To convert ethical principles into a practical roadmap, businesses need a clear framework aligned with industry standards and company values. Also, beyond integrity and fairness, businesses must demonstrate tangible ROI by focusing on metrics like customer acquisition cost, lifetime value, and employee engagement. Operationalizing ethical guardrails involves creating a structured approach to ensure AI deployment aligns with ethical standards. Companies can start by fostering a culture of ethics through comprehensive employee education programs that emphasize the importance of fairness, transparency, and accountability. Establishing clear policies and guidelines is crucial, alongside implementing robust risk assessment frameworks to identify and mitigate potential ethical issues. Regular audits and continuous monitoring should be part of the process to ensure adherence to these standards. Additionally, maintaining transparency for end-users by openly sharing how AI systems make decisions, and providing mechanisms for feedback, further strengthens trust and accountability.
 

How CIOs Should Approach DevOps

CIOs should have a vision for scaling DevOps across the enterprise for unlocking its full range of benefits. A collaborative culture, automation, and technical skills are all necessary for achieving scale. Besides these, the CIO needs to think about the right team structure, security landscape, and technical tools that will take DevOps safely from pilot to production to enterprise scale. It is recommended to start small: dedicate a small platform team focused only on building a platform that enables automation of various development tasks. Build the platform in small steps, incrementally and iteratively. Put together another small team with all the skills required to deliver value to customers. Constantly gather customer feedback and incorporate it to improve development at every stage. Ultimately, customer satisfaction is what matters the most in any DevOps program. Security needs to be part of every DevOps process right from the start. When a process is automated, so should its security and compliance aspects. Frequent code reviews and building awareness among all the concerned teams will help to create secure, resilient applications that can be scaled with confidence.



Quote for the day:

“There is no failure except in no longer trying.” -- Chris Bradford

Daily Tech Digest - June 08, 2024

Understanding Security's New Blind Spot: Shadow Engineering

Shadow engineering leaves security teams with little or no control over LCNC apps that citizen developers can deploy. These apps also bypass the usual code tests designed to flag software vulnerabilities and misconfigurations, which could lead to a breach. This lack of visibility prevents organizations from enforcing policies to keep them in compliance with corporate or industry security standards. ... LCNC apps have many of the same problems found in conventionally developed software, such as hard-coded or default passwords and leaky data. A simple application asking employees for their T-shirt size for a company event could give hackers access to their HR files and protected data. LCNC apps should routinely be evaluated for threats and vulnerabilities, so they can be detected and remediated. ... Give citizen developers guidance in easy-to understand terms to help them remediate risks themselves as quickly and easily as possible. Collaborate with business developers to ensure that security is integrated into the development process of LCNC applications going forward.


‘Technology must augment humanity’: An interview with former IBM CEO Ginni Rometty

While we can't control disruptions, we can control our outlook on the future. Leaders must instill confidence in their teams, emphasising the inevitability of change and the collective ability to find positive solutions. Honesty is a form of optimism, so be honest with yourself and your teams about the issues at hand, resisting attempts to ignore or minimise them. ... Problem-solving is at the core of leadership, so leaders should be unafraid to ask questions, seek insights from others, and involve their teams and wider network in finding solutions. Remember, you do not have to tackle everything alone or have all the answers. When I face a complex problem, I dissect it into manageable pieces and think through each disparate part. ... The right relationships in your life, personal and professional, provide perspective and ideas which is essential for progress. Building a robust network—from friends and family to colleagues and industry peers—provides support and inspiration to maintain optimism and courage amid disruption. The more diverse your network, the more people you can call on to fuel your optimism and courage in the face of disruption.


How Cybersecurity and Sustainability Intersect

Cybersecurity and sustainability are discrete functions in many enterprises, yet they could benefit greatly from being de-siloed. Sustainability and cybersecurity initiatives need C-suite awareness and resources to permeate an enterprise’s culture and actually achieve their goals. “It's not a one-person show anymore. It's really an ownership in that responsibility and a stewardship that cuts across functional leadership across … the entire organization,” says Lynch. In more mature organizations, cybersecurity already has board-level involvement, which can make it easier to see and act on its intersection with sustainability. But for many organizations, cybersecurity and sustainability are separate and even back-office functions. “The cybersecurity leader should not wait for someone to come [and] invite them into these conversations,” says Govindankutty. The stakeholders who need to be involved in cybersecurity and sustainability extend beyond an enterprise’s four walls. Third-party vendors are a vital part of an enterprise’s ecosystem.


Flipping The Script On Startup Success

The first step is to identify the narrowly defined vertical market segments that the company will focus on. The second step is to find a lighthouse customer or two to focus all the team’s attention on to define the minimum viable product (MVP). That is iterative as the customer and the product team go back and forth with features that are must-haves. Then the startup team tests that candidate MVP with a few other customers. ... If you ask any experienced entrepreneur, investor or board member what the most important thing a startup CEO must stay on top of is, it’s to know at all times how much cash they have, what the monthly burn rate is and how long the runway is before cash runs out. Many mistakes are excusable and recoverable, but running out of cash by surprise is neither. ... Culture is not pizza and beer on Fridays, foosball tables or little rooms filled with toys. It is about the values of the company and how they are espoused. It is about the tone the CEO sets and how they communicate with all of their constituents. And the importance of culture is not not just about company morale, although that is very important. It is about attracting and retaining the best talent. While it might be nice to think you can put this off while focusing on the first four things, you would be wrong.


Empowering Developers to Harness Sensor Data for Advanced Analytics

Data from sensors offers a treasure trove of insights from the physical world for data scientists. From tracking temperature fluctuations in a greenhouse to analyzing the vibrations of industrial machines in a manufacturing plant, these tiny devices capture crucial information that can be used for groundbreaking research and development. The journey from collecting raw sensor data to actionable analysis can be riddled with stumbling blocks, as the realities of hardware components and environmental conditions come into play. The typical approach to sensor data capture often involves a cumbersome workflow across the various teams involved, including data scientists and engineers. While data scientists meticulously define sensor requirements and prepare their notebooks to process the information, engineers deal with the complexities of hardware deployment and software updates that reduce the scientists’ ability to quickly adjust these variables on the fly. This creates a long feedback loop that delays the pace of innovation across the organization.


To lead a technology team, immerse yourself in the business first

When asked to rank the defining characteristics of a leading CIO, respondents were split between the conventional and contemporary, saying the traditional, more IT-centric qualities are just as important as the strategic and more customer-focused ones. While aligning tech vision and strategy with the business has been the role of CIOs and technology leaders for some time, the scope of their duties now extends deeper into the business itself. "Establishing and managing a tech vision isn't enough," said DiLorenzo. "Today's CIOs need to own all the various technology uses across their organizations and ensure they're actively coordinating and orchestrating their fellow tech leaders -- as well as their business peers -- to co-create a vision and tech strategy that aligns with, and furthers, the overall enterprise strategy." Getting to a leadership position also requires immersing oneself in the business, Shaikh advised. "Business acumen, which includes understanding various business functions and industry dynamics, can be cultivated by spending time in business units," she said. "This understanding is crucial for strategic thinking, to help identify opportunities where technology can impact goals."


The unseen gen AI revolution on the AI PC and the edge

The shift towards edge and PC-based AI is not without its challenges. Privacy and security concerns are paramount, as devices become more autonomous and capable of processing sensitive data. Companies must focus on privacy and AI ethics to be the cornerstone of their approach, ensuring that as AI becomes more integrated into our devices, it does so in a manner that respects user privacy and trust. Moreover, the energy efficiency of AI workloads is a critical consideration, especially for battery-powered devices. Advancements in low-power, high-performance processors are pivotal in addressing this challenge, ensuring that the benefits of gen AI are not offset by decreased device longevity or increased environmental impact. Intel’s OpenVINO toolkit further enhances these benefits by optimizing deep learning models for fast, efficient performance across Intel’s hardware portfolio. This optimization enables customers to deploy AI applications more widely, even in resource-constrained environments, without sacrificing performance. As we enter this new era, the way we think about gen AI and how we engage with it will continue to change. 


Enhancing Cloud Security in Response to Growing Digital Threats

Security challenges are unique to hybrid cloud environments where public clouds combine with on-premises infrastructure. Secure migration tools and techniques are vital to prevent data leaks or unauthorized access. Encrypt data before transferring and place controls on both ends during migration to reduce associated risks. Network segmentation in hybrid cloud environments requires thorough interconnectivity planning. Carefully configure firewall connections, firewalls, and network access controls to ensure only authorized traffic flows between on-premises resources and those hosted within the cloud. Visibility across hybrid cloud environments requires centralized monitoring to enhance threat detection capability. SIEM solutions can collect security logs from both on-premises and cloud systems, helping provide a unified view of an enterprise’s security posture. The more organizations embrace cloud computing, the more preparation for emerging trends is required. Zero-trust security models, which allow continuous authentication and authorization regardless of the device or location, are increasingly popular.


Ethical Issues in Information Technology (IT)

Establishing ethical IT practices is also important because people’s trust in the tech industry chips away each time they learn about unethical practices, especially in the wake of reports on data usage by companies such as Facebook and Google. “If companies don’t have ethical IT practices in place, they’re going to lose the trust of their customers and clients,” says Ferebee. “IT professionals need to take it seriously. They also need to let the public know they take it seriously so the public feels safe using their products and services.” Whether or not you’re in a leadership position, it is important to lead by example when it comes to ethics in IT. “People are often afraid to speak up because they’re concerned with the repercussions,” says Ferebee. “But when it comes to ethics in IT, you need to speak up — lead by example, advocate for it, and talk about it all the time. That could include reporting ethical issues, sourcing or creating and then implementing ethics training, and developing internal frameworks for your IT department. You don’t have to be the director of IT to start implementing this.”


Establishing Trust in AI Systems: 5 Best Practices for Better Governance

Security culture drives both behaviors and beliefs. A security-first organization promotes information sharing, transparency, and collaboration. When risks are discovered, or when issues occur, communication should be immediate and designed to clearly convey to employees how their behaviors and actions can both support and detract from security efforts. Enlist employees in these efforts by ensuring that your culture is positive and supportive. ... Security culture does not exist in a vacuum and does not evolve in a silo. Input from a wide range of stakeholders—from employees to customers and partners, regulators and the board—is critical for ensuring that you understand how AI is enabling efficiencies, and where risks may be emerging. ... By seeking input from key constituents in an open and transparent manner, they will be more likely to share their concerns and help uncover potential risks while there’s still time to adequately address those risks. Acknowledge and respond to feedback promptly and highlight the positive impacts of that feedback.Tackling third-party risks



Quote for the day:

"Don't wait for the perfect moment take the moment and make it perfect." -- Aryn Kyle

Daily Tech Digest - May 15, 2024

Why Capability-Based IT Investments Planning Doesn’t Work for Enterprises Today

Capability-based Planning has been around for long in the world of Enterprise Architecture (EA), and often finds a mention in leading EA frameworks. At its core is the concept of “business capability” (or simply “capability”) which represents the “what” that the business does. This is different from the “how” of the business, which is represented by constructs such as business processes, value streams, and value chains. ... Capability-based IT planning approaches are typically linear and spread over years. They do not consider the real and dynamic nature of the enterprises of today, wherein new themes such as Product Management, Agile enterprise, and AI-led business disruption require continuous introspection and adaptation to evolving industry practices and customer preferences. ... The product roadmap provides prioritised inputs for the landscape to respond to. The good thing here is that such roadmaps typically have clarity up to a few quarters ahead (up to 1- 2 years generally), with the initial quarters being more concrete and stable as opposed to the later quarters. When combined with EA-driven landscape impact analysis, the resulting IT initiatives are much more aligned to the dynamics of the business.


Evolving Roles: Developers and AI in Coding

The increasing use of AI in software development is causing a paradigm shift in the jobs of developers. Developers are evolving from being merely code writers to orchestrators of technology, strategists, and leaders of innovation. This calls for adjusting to new roles that prioritize higher-level decision-making, problem characterization, and system design. One of the changes involves that the developers need to be skilled in incorporating and tailoring AI tools into their workflow. This entails knowing the possibilities and limitations of these instruments in addition to being able to use them. Developers can devote their time to more complicated and valuable operations by becoming proficient with these technologies and freeing up time from repetitive jobs. As AI assumes greater responsibility for the technical coding process, soft skills like project management, communication, and creative problem-solving become more crucial. Developers need to be multidisciplinary collaborators, proficient communicators with non-technical team members, and project managers of both people and technology.


Why is embedded insurance so popular right now?

“Consumers get good value with embedded insurance for two main reasons. The first is trust. Customers want to buy insurance products from their trusted brands, not financial services and insurance organisations. Through embedded solutions, customers can stick to shopping with and purchasing from the brands they love and trust. There is also no need to head to a physical outlet to buy insurance – customers get protection at the exact point of sale and the service or product will be covered instantly. There is a lot of value in this ease and simplicity. Embedded solutions do a lot of the hard work and it means safeguarding what you care about is no more complicated than ticking a box on purchase. The second reason is data. Embedded insurance utilises customer data to provide bespoke costs and policies. Thanks to technology such as open banking APIs (which facilitate the data transfer between entities), tech players can assess the preferences of users, their needs and financial behaviour. Embedded insurance platforms can therefore make informed decisions and provide diverse and tailored offerings to consumers based on their risk profiles. 


Understanding the Modern Data Stack

The architecture of a modern data stack is meticulously designed to ensure utmost flexibility and seamless integration, thereby revolutionizing the workflow for businesses. The hallmark of such an advanced system lies in its ability to adapt to the evolving demands of data processing and analysis. This flexibility is not just limited to handling diverse data types but also extends to its capability to integrate with a myriad of tools and platforms. Integration plays a pivotal role in enhancing this ecosystem, acting as the glue that binds all components of the data stack together. It ensures that data flows smoothly from one process to another without bottlenecks, enabling real-time analytics and insights. This interconnectedness allows for a holistic view of operations, making it easier for businesses to make informed decisions quickly. ... Ensuring Data Quality and security while maintaining cross-platform compatibility forms a cornerstone of the modern data stack. This holistic approach integrates various components, from databases and analytics tools to data integration and visualization platforms, ensuring seamless interoperability across different environments. 


Private cloud makes its comeback, thanks to AI

Private cloud providers may be among the key beneficiaries of today’s generative AI gold rush as, once seemingly passé in favor of public cloud, CIOs are giving private clouds — either on-premises or hosted by a partner — a second look. At the center of this shift is increasing acknowledgement that to support AI workloads and to contain costs, enterprises long-term will land on a hybrid mix of public and private cloud. ... Todd Scott, senior vice president for Kyndryl US, acknowledges that AI and cost are among the key factors driving enterprises toward private clouds. “Most enterprises are currently exploring AI on the public cloud, but we expect clients will ultimately bring the app to their data and run AI where the data is, in private environments and at the edge,” he says. “Another factor that’s driving a move back to private cloud is predictability of cost,” Scott says. “Agile enterprises, by definition, make frequent changes to their applications, so they sometimes see big fluctuations in the cost of having their data on public clouds. Private clouds provide more predictability because the infrastructure is dedicated.”


CISOs Reconsider Their Roles in Response to GenAI Integration

The rise of AI and generative AI tools is a double-edged sword. “On one hand, it’s increasing their organizations’ threat exposure because cybercriminals can now use generative AI tools to rapidly scale their attacks,” said Mike Britton, CISO of Abnormal Security. “On the other hand, CISOs also have a valuable opportunity to leverage AI in strengthening their defenses.” GenAI can help enhance security content creation, security testing and analytics, incident response, and forensics. AI and machine learning can play a role in that, Britton pointed out, by ingesting signals from across the email and SaaS environment and deeply understanding normal behavior across this ecosystem. “AI models can then be used to detect anomalous activity and understand when a message or an event may be malicious,” Britton said. “This can help security teams detect more attacks at a faster speed, ensuring that threat actors never successfully reach their targets.” Jose Seara, CEO and founder of DeNexus, pointed out that modern cybersecurity solutions are already AI-enabled and take advantage of AI’s data processing power to make sense of a large volume of cybersecurity signals. 


How Adobe manages AI ethics concerns while fostering creativity

At Adobe, ethical innovation is our commitment to developing AI technologies in a responsible way that respects our customers and communities and aligns with our values. Back in 2019, we established a set of AI Ethics Principles we hold ourselves to when we're innovating, including accountability, responsibility, and transparency. With the development of Firefly, our focus has been on leveraging these principles to help mitigate biases, respond to issues quickly, and incorporate customer feedback. Our ongoing efforts help ensure that we are implementing Firefly responsibly without slowing down innovation. ... Even before Adobe began work on Firefly, our Ethical Innovation team had leveraged our AI Ethics Principles to create a standardized review process for our AI products and features -- from design to development to deployment. For any product development at Adobe, my team first works with the product team to assess potential risks, evaluate mitigations, and demonstrate how our AI Ethics Principles are being applied. It is not done in isolation.


Why Tokens Are Like Gold for Opportunistic Threat Actors

Once a threat actor has a token, they also have whatever rights and authorizations are imbued to the user. If they have captured an IdP token, they can access all corporate applications' SSO capabilities integrated with the IdP — without an MFA challenge. If it is an admin-level credential with associated privileges, they can potentially wage a world of devastation against systems, data, and backups. The longer the token is active, the more they can access, steal, and damage. Further, they can then create new accounts that no longer require the use of the token for ongoing network access. While expiring session tokens more frequently will not stop these sorts of attacks, it will greatly minimize the risk footprint by shortening the window of opportunity for a token to function. Unfortunately, we often see that tokens are not being expired at regular intervals, and some breach reporting also suggests that default token expirations are being deliberately extended. ... Longer token expiries provide user convenience — but at a high security price.


Low-tech tactics still top the IT security risk chart

Low-tech attack vectors are being adapted by cyber criminals to overcome security defenses because they can often evade detection until it’s too late. ... Hyatt’s team recently identified a rogue USB drive used to install the Raspberry Robin malware, which acts as a launchpad for subsequent attacks and gives bad actors the ability to fulfil the three key elements of a successful attack — establish a presence, maintain access and enable lateral movement. ... Even commonplace tasks such as generating a QR code to configure the Microsoft Authenticator app that’s used for two-factor authentication with Office 365 is open to exploitation, because it normalizes QR codes as a secure mechanism in the minds of users, Heiland says. “People have been trained not to click on links, but not when it comes to using QR codes for authentication,” Helland tells CSO. The danger with a QR code is that it can be configured to launch almost any application on a device, download a file, or open a browser and go to a website, all without the user being aware of what it’s going to do.


Cyber Insurers Pledge to Help Reduce Ransom Payments

As ransomware continues to pummel Britain, the government's cybersecurity agency and three major insurance associations have pledged to offer better support and guidance to victims. ... "Ransomware continues to be the biggest day-to-day cybersecurity threat to most U.K. organizations," Oswald said in a keynote speech. "In recent months, law enforcement has dramatically reduced the global threat from ransomware by disrupting LockBit's activities and just last week unmasking and sanctioning one of its Russia-based leaders." Nevertheless, officials continue to urge organizations to hone their defenses and constantly keep improving their resilience capabilities, to better repel hack attacks and avoid ever having to even consider paying a ransom. "The NCSC does not encourage, endorse or condone paying ransoms, and it's a dangerous misconception that doing so will make an incident go away or free victims of any future headaches," Oswald said. "In fact, every ransom that is paid signals to criminals that these attacks bear fruit and are worth doing."



Quote for the day:

''The distance between insanity and genius is measured only by success.'' -- Bruce Feirstein