
Daily Tech Digest - May 05, 2025


Quote for the day:

"Listening to the inner voice and trusting the inner voice is one of the most important lessons of leadership." -- Warren Bennis


How CISOs can talk cybersecurity so it makes sense to executives

“With complex technical topics and evolving threats to cover, the typical brief time slot often proves inadequate for meaningful dialogue. Security leaders can address this by preparing concise, business-focused briefing materials in advance and prioritizing the most critical issues for discussion. When time constraints persist, they should advocate for dedicated sessions to ensure proper oversight of cybersecurity matters,” said Ross Young ... When communicating with the board of directors, Turgal advises mapping cybersecurity initiatives to shareholder value. “If the business goal is to protect shareholder value, there is a direct connection to business continuity and increased operational uptime.” To support that, security leaders might increase cyber resilience through containerized immutable backups, disaster recovery and incident response plans—tools that can mitigate brand-damaging attacks and prevent stock price volatility. ... Some of the most productive conversations don’t happen in meetings. They happen over coffee, or on calls with individual board members. If possible, schedule one-on-ones with directors to walk them through key risks. Ask what they want to know more about. Find out how they prefer to receive information. By building rapport outside the meeting, you’ll face fewer surprises inside it. Your strongest allies in the boardroom are often the CFO and legal chief.


The great cognitive migration: How AI is reshaping human purpose, work and meaning

Human purpose and meaning are likely to undergo significant upheaval. For centuries, we have defined ourselves by our ability to think, reason and create. Now, as machines take on more of those functions, the questions of our place and value become unavoidable. If AI-driven job losses occur on a large scale without a commensurate ability for people to find new forms of meaningful work, the psychological and social consequences could be profound. It is possible that some cognitive migrants could slip into despair. AI scientist Geoffrey Hinton, who won the 2024 Nobel Prize in physics for his groundbreaking work on deep learning neural networks that underpin LLMs, has warned in recent years about the potential harm that could come from AI. In an interview with CBS, he was asked if he despairs about the future. He said he did not because, ironically, he found it very hard to take [AI] seriously. He said: “It’s very hard to get your head around the point that we are at this very special point in history where in a relatively short time, everything might totally change. A change on a scale we’ve never seen before. It’s hard to absorb that emotionally.” There will be paths forward. Some researchers and economists, including MIT economist David Autor, have begun to explore how AI could eventually help rebuild middle-class jobs, not by replacing human workers, but by expanding what humans can do. 


CISO vs CFO: why are the conversations difficult?

The disconnect between CISOs and CFOs remains a challenge in many organizations. While cybersecurity threats escalate in scale and complexity, senior leadership often fails to fully grasp the magnitude of the risk. This gap is visible in EY’s 2025 Cybersecurity study, which shows that 68% of CISOs worry that senior leaders underestimate the risks. Progress in bridging this divide happens when CISOs and CFOs are willing to meet halfway, aligning technical priorities with financial realities. Argyle realized that to move the conversation forward, he had to change his approach: he stopped defending the technology and started showing the impact. ... Redesigning the relationship between a CISO and a CFO isn’t something that’s fixed over a single meeting or a strong cup of coffee. It takes time, mutual understanding, and open conversations. As Argyle points out, these discussions shouldn’t be limited to budget season, when both sides are already in negotiation mode. To truly build trust and alignment, CISOs and CFOs need to keep the dialogue alive year-round and make efforts to understand each other’s work, long before money is involved. “Ideally, I’d bring the CFO into tabletop cyber crisis simulations and scenario planning,” he adds. “Let them see the domino effect of a breach — not just read about it in a report. That firsthand exposure builds understanding faster than any PowerPoint.”


How to Build a Team That Thinks and Executes Like a Founder

If your team has a deep understanding of what you are trying to accomplish, you can ensure that everyone is rowing in the same direction. It isn't enough to simply share your vision and goals. To really get the team engaged, it's critical that they understand the underlying "why" behind your goals and decisions. One of the best ways to do this is by being as transparent as possible, such as sharing financial data and other key business metrics. This information can help the team understand the bigger picture and connect how their individual roles contribute to the overall success of the company. ... First, stop assigning tasks to your team. Instead, give team members ownership over entire end-to-end processes. This allows them to take full responsibility for the success of the process and helps you hold the team accountable for executing it successfully. The best way to do this is by focusing on outcome-based delegation. This provides flexibility and autonomy for the team to figure out the best way to achieve the goal. As a business owner, you don't want the team coming to you for every little decision. ... In many cases, a bad deliverable is a result of miscommunication, unclear direction or not having access to the right resources. The challenge is that many business owners give up when delegation doesn't work the way they hoped the first time.


Quiet hiring: How HR can turn this trend into a winning strategy

At its heart, quiet hiring is about strategic talent management. It’s a way for organisations to fill skill gaps and meet changing business needs without expanding their workforce in the traditional sense. Instead of hiring full-time employees, businesses tap into existing employees, freelancers, or contractors to temporarily shift roles or tackle specific projects. It’s about working smarter with the talent you already have, and supplementing that with external experts when needed. ... Instead of looking outside the organisation to fill a gap, businesses can move current employees into new roles or give them additional responsibilities. For instance, if a marketing expert has experience with analytics, they might temporarily shift to the data analytics team to support a busy period. Not only does this save the company time and money in recruitment, but it also develops your current team, gives employees fresh opportunities, and fosters an agile workforce. It’s a win-win—employees gain new skills, and organisations can fill critical gaps without the lengthy hiring process. ... The business world is unpredictable, and the ability to adapt quickly is more important than ever. Quiet hiring offers companies the flexibility they need to respond to sudden changes. For example, if demand for a product surges unexpectedly, internal employees can be quickly moved to meet the increased workload, while contractors can be brought in to handle the temporary increase in tasks.


Attack of the AI crawlers

To be fair, it’s not entirely clear that robots.txt directives are legally enforceable, according to Susskind and other attorneys who focus on technology issues. Therefore, if the model makers were arguing that they have the right to violate those requests, that might be a legitimate argument. But that is not what they are arguing. They are saying they abide by those rules, but then many send out undeclared crawlers that ignore those rules anyway. The real problem is that they are inflicting financial damage on the site owners by forcing them to pay far more for bandwidth. And it is solely the model makers that benefit, not the site owners. What is IT to do, Susskind asked, when an undeclared genAI crawler “hits my site a million times a day”? Indeed, Susskind’s team has seen “a single bot hitting a site millions of times per hour. That is several orders of magnitude more burdensome than normal SEO crawling.” ... The problem, according to attorneys in this space, is not with establishing monetary damages but with attribution: how to determine who’s responsible for the surging traffic. In such a hypothetical court case, the lawyers for the deep-pocketed genAI model makers would likely argue that plaintiffs’ sites are visited by millions of users and bots from multiple sources. Without proof tying traffic to a specific crawler or tying a crawler to a specific model maker, the model maker can’t be held accountable for plaintiffs’ financial damages.
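
As a concrete starting point on the attribution problem, here is a minimal sketch (the log path and the list of declared AI user-agent strings are illustrative assumptions) of tallying per-crawler load from a standard combined-format access log:

```python
import re
from collections import Counter

# Tally requests per user agent from a combined-format access log to
# quantify crawler load. LOG_PATH and SUSPECT_AGENTS are illustrative.
LOG_PATH = "access.log"
SUSPECT_AGENTS = ["GPTBot", "CCBot", "ClaudeBot", "Bytespider"]

# Matches: "REQUEST" STATUS SIZE "REFERER" "USER-AGENT"
agent_re = re.compile(r'"[^"]*" \d{3} \S+ "[^"]*" "([^"]*)"')

hits = Counter()
with open(LOG_PATH) as log:
    for line in log:
        match = agent_re.search(line)
        if match:
            hits[match.group(1)] += 1

for agent, count in hits.most_common(20):
    flag = " <-- declared AI crawler" if any(s in agent for s in SUSPECT_AGENTS) else ""
    print(f"{count:>10}  {agent[:80]}{flag}")
```

Note the limitation this exposes: a tally like this only surfaces crawlers that declare themselves. The undeclared bots Susskind describes require IP-range and behavioral analysis, which is exactly why attribution is hard.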


A Farewell to APMs — The Future of Observability is MCP tools

Initially introduced by Anthropic, the Model Context Protocol (MCP) represents a communication tier between AI agents and other applications, allowing agents to access additional data sources and perform actions as they see fit. More importantly, MCPs open up new horizons for the agent to intelligently choose to act beyond its immediate scope and thereby broaden the range of use cases it can address. The technology is not new, but the ecosystem is. In my mind, it is the equivalent of evolving from custom mobile application development to having an app store. ... With the advent of MCPs, software developers now have the choice of adopting a different model for developing software. Instead of focusing on a specific use case, trying to nail the right UI elements for hard-coded usage patterns, applications can transform into a resource for AI-driven processes. This describes a shift from supporting a handful of predefined interactions to supporting numerous emergent use cases. ... Making observability useful to the agent, however, is a little more involved than slapping an MCP adapter onto an APM. Indeed, many current-generation tools, in rushing to support the new technology, took that very route without considering that AI agents have limitations of their own.
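
To make the shift concrete, here is a minimal sketch of exposing one observability query as an agent-callable tool. It assumes the FastMCP interface from the official MCP Python SDK, and query_p99_latency is a hypothetical stand-in for a real APM backend call:

```python
# Sketch of exposing one observability query as an agent-callable tool,
# assuming the FastMCP interface from the official MCP Python SDK.
from mcp.server.fastmcp import FastMCP

mcp = FastMCP("observability")

def query_p99_latency(service: str, minutes: int) -> float:
    # Hypothetical stand-in: replace with a real query to your metrics backend.
    return 123.4

@mcp.tool()
def p99_latency(service: str, minutes: int = 15) -> str:
    """Return the p99 request latency for a service over a recent window."""
    value = query_p99_latency(service, minutes)
    return f"p99 latency for {service} over the last {minutes}m: {value:.1f} ms"

if __name__ == "__main__":
    mcp.run()  # serves the tool over stdio for an MCP-capable agent
```

The design point is that the agent, not a hard-coded dashboard, decides when and how to call the tool, which is what enables the emergent use cases described above.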


Knowing when to use AI coding assistants

AI performs exceptionally well with common coding patterns. Its sweet spot is generating new code with low complexity when your objectives are well-specified and you’re using popular libraries, says Swiber. “Web development, mobile development, and relatively boring back-end development are usually fairly straightforward,” adds Charity Majors, co-founder and CTO of Honeycomb. The more common the code and the more online examples, the better AI models perform. ... While AI accelerates development, it creates a new burden to review and validate the resulting code. “In a worst-case scenario, the time and effort required to debug and fix subtle issues in AI-generated code could even eclipse the time it would require to write the code from scratch,” says Sonar’s Wang. Quality and security can suffer from vague prompts or poor contextual understanding, especially in large, complex code bases. Transformer-based models also face limitations with token windows, making it harder to grasp projects with many parts or domain-specific constraints. “We’ve seen cases where AI outputs are syntactically correct but contain logical errors or subtle bugs,” Wang notes. These mistakes originate from a “black box” process, he says, making AI risky for mission-critical enterprise applications that require strict governance.


CISOs Take Note: Is Needless Cybersecurity Strangling Your Business?

"For IT and security teams, redundant and obsolete security tools or measures increase workflows, hurt efficiency, and extend incident response and patch time," he explains via email. "When there's excessive or ineffective tools in the security stack, teams waste valuable time sifting through redundant and low-value alerts, hampering them from focusing on real threats." ... Additionally, excessive security controls, such as overly intrusive multi-factor authentication, can create employee friction, slowing down and challenging collaboration with partners, vendors, and customers, Shilts says. "This often results in employees finding workarounds, such as using their personal emails, which introduces security risks that are difficult to track and manage." ... In general, an organizational security posture, including tools and procedures, should be assessed annually or even earlier if a major change is implemented, Biswas says. Ideally, to prevent conflicts of interest, such assessments should be performed by independent, expert third parties. "After all, it’s difficult for an implementor or operator to be a truly impartial assessor of their own work," he explains. "While some organizations may be able to do so via internal audit, for most it makes sense to hire an outsider to play devil’s advocate."


Machines Cannot Feel or Think, but Humans Can, and Ought To

In a philosophical debate, the question, as it is applied to AI, is: How do we know that AI does not have an experience of the world? The same question could be asked of flowers, animals, stones, and automobiles. In this sense, the question of “other intelligences” is often quite valuable and holds tremendous potential for escaping the capital-focused development of information processing machines. In its most useful form, this approach to “post-humanism” refers to the evolved understanding that humans are not the center of the universe, but exist within a dense network of relationships. This definition of the post-human may pave the way to decentering definitions of “human” that privilege human needs over those of the environment, or even people whom we consider less-than. It may cultivate a deeper appreciation for the complexity of animals and their ecosystems, and, through careful design, might lead to an approach to technological development that considers the interdependencies within systems as connected, not isolated. Have we even started to build a capacity to understand those worlds, to empathize with trees and rivers and elk, to the extent to which we can now fully shift our attention to the potential emotional experiences of a hypothetical Microsoft product? 

Daily Tech Digest - April 29, 2025


Quote for the day:

"Don't let yesterday take up too much of today." -- Will Rogers



AI and Analytics in 2025 — 6 Trends Driving the Future

As AI becomes deeply embedded in enterprise operations and agentic capabilities are unlocked, concerns around data privacy, security and governance will take center stage. With emerging technologies evolving at speed, a mindset of continuous adaptation will be required to ensure requisite data privacy, combat cyber risks and successfully achieve digital resilience. As organizations expand their global footprint, understanding the implications of evolving AI regulations across regions will be crucial. While unifying data is essential for maximizing value, ensuring compliance with diverse regulatory frameworks is mandatory. A nuanced approach to regional regulations will be key for organizations navigating this dynamic landscape. ... As the technology landscape evolves, continuous learning becomes essential. Professionals must stay updated on the latest technologies while letting go of outdated practices. Tech talent responsible for building AI systems must be upskilled in evolving AI technologies. At the same time, employees across the organization need training to collaborate effectively with AI, ensuring seamless integration and success. Whether through internal upskilling or embarking on skills-focused partnerships, investment in talent management will prove crucial to winning the tech-talent gold rush and thriving in 2025 and beyond.


Generative AI is not replacing jobs or hurting wages at all, say economists

The researchers looked at the extent to which company investment in AI has contributed to worker adoption of AI tools, and also how chatbot adoption affected workplace processes. While firm-led investment in AI boosted the adoption of AI tools — saving time for 64 to 90 percent of users across the studied occupations — chatbots had a mixed impact on work quality and satisfaction. The economists found for example that "AI chatbots have created new job tasks for 8.4 percent of workers, including some who do not use the tools themselves." In other words, AI is creating new work that cancels out some potential time savings from using AI in the first place. "One very stark example that it's close to home for me is there are a lot of teachers who now say they spend time trying to detect whether their students are using ChatGPT to cheat on their homework," explained Humlum. He also observed that a lot of workers now say they're spending time reviewing the quality of AI output or writing prompts. Humlum argues that can be spun negatively, as a subtraction from potential productivity gains, or more positively, in the sense that automation tools historically have tended to generate more demand for workers in other tasks. "These new job tasks create new demand for workers, which may boost their wages, if these are more high value added tasks," he said.


Advancing Digital Systems for Inclusive Public Services

Uganda adopted the modular open-source identity platform, MOSIP, two years ago. A small team of 12, with limited technical expertise, began adapting the MOSIP platform to align with Uganda's Registration of Persons Act, gradually building internal capacity. By the time the system integrator was brought in, Uganda incorporated digital public good, DPG, into its legal framework, providing the integrator with a foundation to build upon. This early customization helped shape the legal and technical framework needed to scale the platform. But improvements are needed, particularly in the documentation of the DPG. "Standardization, information security and inclusion were central to our work with MOSIP," Kisembo said. "Consent became a critical focus and is now embedded across the platform, raising awareness about privacy and data protection." ... Nigeria, with a population of approximately 250 million, is taking steps to coordinate its previously fragmented digital systems through a national DPI framework. The country deployed multiple digital solutions over the last 10 to 15 years, which were often developed in silos by different ministries and private sector agencies. In 2023 and 2024, Nigeria developed a strategic framework to unify these systems and guide its DPI adoption. 


Eyes, ears, and now arms: IoT is alive

In just a few years, devices at home and work started including cameras to see and microphones to hear. Now, with new lines of vacuums and emerging humanoid robots, devices have appendages to manipulate the world around them. They’re not only able to collect information about their environment but can touch, “feel”, and move it. ... But, knowing the history of smart devices getting hacked, there’s cause for concern. From compromised baby monitors to open video doorbell feeds, bad actors have exploited default passwords and unencrypted communications for years. And now, beyond seeing and hearing, we’re on the verge of letting devices roam around our homes and offices with literal arms. What’s stopping a hacked robot vacuum from tampering with security systems? Or your humanoid helper from opening the front door? ... If developers want robots to become a reality, they need to create confidence in these systems immediately. This means following best practice cybersecurity by enabling peer-to-peer connectivity, outlawing generic credentials, and supporting software throughout the device lifecycle. Likewise, users can more safely participate in the robot revolution by segmenting their home networks, implementing multi-factor authentication, and regularly reviewing device permissions.


How to Launch a Freelance Software Development Career

Finding freelance work can be challenging in many fields, but it tends to be especially difficult for software developers. One reason is that many software development projects do not lend themselves well to a freelancing model because they require a lot of ongoing communication and maintenance. This means that, to freelance successfully as a developer, you'll need to seek out gigs that are sufficiently well-defined and finite in scope that you can complete within a predictable period of time. ... Specifically, you need to envision yourself also as a project manager, a finance director, and an accountant. When you can do these things, it becomes easier not just to freelance profitably, but also to convince prospective clients that you know what you're doing and that they can trust you to complete projects with quality and on time. ... While creating a portfolio may seem obvious enough, one pitfall that new freelancers sometimes run into is being unable to share work due to nondisclosure agreements they sign with clients. When negotiating contracts, avoid this risk by ensuring that you'll retain the right to share any key aspects of a project for the purpose of promoting your own services. Even if clients won't agree to letting you share source code, they'll often at least allow you to show off the end product and discuss at a high level how you approached and completed a project.


Digital twins critical for digital transformation to fly in aerospace

Among the key conclusions were that there was a critical need to examine the standards that currently support the development of digital twins, identify gaps in the governance landscape, and establish expectations for the future. ... The net result will be that stakeholder needs and objectives become more achievable, resulting in affordable solutions that shorten test, demonstration, certification and verification, thereby decreasing lifecycle cost while increasing product performance and availability. Yet the DTC cautioned that cyber security considerations within a digital twin and across its external interfaces must be customisable to suit the environment and risk tolerance of digital twin owners. ... First, the DTC said that evidence suggests a necessity to examine the standards that currently support digital twins, identify gaps in the governance landscape, and set expectations for future standard development. In addition, the research team identified that standardisation challenges exist when developing, integrating and maintaining digital twins during design, production and sustainment. There was also a critical need to identify and manage requirements that support interoperability between digital twins throughout the lifecycle. This recommendation also applied to the more complex SoS Digital Twins development initiatives. Digital twin model calibration needs to be an automated process and should be applicable to dynamically varying model parameters.
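
On the report's last point, a minimal sketch of what automated calibration can look like: periodically refit a model parameter against the latest sensor window so the twin tracks drifting real-world behavior. The cooling-style model, parameter, and data below are all hypothetical:

```python
import numpy as np
from scipy.optimize import curve_fit

# Illustrative twin calibration: refit a thermal decay constant k against
# the latest sensor window so the twin tracks a dynamically varying system.
def model(t, k, T_env=20.0, T0=90.0):
    # Newton's-cooling-style response; stands in for the twin's physics model.
    return T_env + (T0 - T_env) * np.exp(-k * t)

def recalibrate(timestamps, readings):
    (k_fit,), _ = curve_fit(model, timestamps, readings, p0=[0.1])
    return k_fit

# Simulated sensor window with noise; the "true" k has drifted to 0.15.
t = np.linspace(0, 30, 60)
y = model(t, 0.15) + np.random.normal(0, 0.3, t.size)
print(f"calibrated k = {recalibrate(t, y):.3f}")
```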


Quality begins with planning: Building software with the right mindset

Too often, quality is seen as the responsibility of QA engineers. Developers write the code, QA tests it, and ops teams deploy it. But in high-performing teams, that model no longer works. Quality isn’t one team’s job; it’s everyone’s job. Architects defining system components, developers writing code, product managers defining features, and release managers planning deployments all contribute to delivering a reliable product. When quality is owned by the entire team, testing becomes a collaborative effort. Developers write testable code and contribute to test plans. Product managers clarify edge cases during requirements gathering. Ops engineers prepare for rollback scenarios. This collective approach ensures that no aspect of quality is left to chance. ... One of the biggest causes of software failure isn’t building the wrong way, it’s building the wrong thing. You can write perfectly clean, well-tested code that works exactly as intended and still fail your users if the feature doesn’t solve the right problem. That’s why testing must start with validating the requirements themselves. Do they align with business goals? Are they technically feasible? Have we considered the downstream impact on other systems or components? Have we defined what success looks like?


What Makes You a Unicorn in Your Industry? Start by Mastering These 4 Pillars

First, you have to have the capacity, the skill, to excel in that area. Additionally, you have to learn how to leverage that standout aspect to make it work for you in the marketplace: incorporating it into your branding, spotlighting it in your messaging, maybe even including it in your name. Concise as the notion is, there's actually a lot of breadth and flexibility in it: when it comes to selecting what you want to do better than anyone else, your choices are boundless. ... Consumers have gotten quite savvy at sniffing out false sincerity, so when they come across the real thing, they're much more prone to give you their business. Basically, when your client base believes you prioritize your vision, your team and creating an incredible product or service over financial gain, they want to work with you. ... Building and maintaining a remarkable "company culture" can just be a buzzword to you, or you can bring it to life. I can't think of any single factor that makes my company more valuable to my clients than the value I place on my people and the experience I endeavor to provide them by working for me. When my staff feels openly recognized, wholly supported and vitally important to achieving our shared outcomes, we're truly unstoppable. So keep in mind that your unicorn focus can be internal, not necessarily client-facing.



Conquering the costs and complexity of cloud, Kubernetes, and AI

While IT leaders clearly see the value in platform teams—nine in 10 organizations have a defined platform engineering team—there’s a clear disconnect between recognizing their importance and enabling their success. This gap signals major stumbling blocks ahead that risk derailing platform team initiatives if not addressed early and strategically. For example, platform teams find themselves burdened by constant manual monitoring, limited visibility into expenses, and a lack of standardization across environments. These challenges are only amplified by the introduction of new and complex AI projects. ... Platform teams that manually juggle cost monitoring across cloud, Kubernetes, and AI initiatives find themselves stretched thin and trapped in a tactical loop of managing complex multi-cluster Kubernetes environments. This prevents them from driving strategic initiatives that could actually transform their organizations’ capabilities. These challenges reflect the overall complexity of modern cloud, Kubernetes, and AI environments. While platform teams are chartered with providing infrastructure and tools necessary to empower efficient development, many resort to short-term patchwork solutions without a cohesive strategy. 


Reporting lines: Could separating from IT help CISOs?

CFOs may be primarily concerned with the financial performance of the business, but they also play a key role in managing organizational risk. This is where CISOs can learn the tradecraft in translating technical measures into business risk management. ... “A CFO comes through the finance ranks without a lot of exposure to IT and I can see how they’re incentivized to hit targets and forecasts, rather than thinking: if I spend another two million on cyber risk mitigation, I may save 20 million in three years’ time because an incident was prevented,” says Schat. Budgeting and forecasting cycles can be a mystery to CISOs, who may engage with the CFO infrequently, and interactions are mostly transactional around budget sign-off on cybersecurity initiatives, according to Gartner. ... It’s not uncommon for CISOs to find security seen as a barrier, where the benefits aren’t always obvious, and are actually at odds with the metrics that drive the CIO. “Security might slow down a project, introduce a layer of complexity that we need from a security perspective, but it doesn’t obviously help the customer,” says Bennett. Reporting to CFOs can relieve potential conflicts of interest. It can allow CISOs to broaden their involvement across all areas of the organization, beyond input in technology, because security and managing risk is a whole-of-business mission.

Daily Tech Digest - February 01, 2025


Quote for the day:

"Leadership is a matter of having people look at you and gain confidence, seeing how you react. If you're in control, they're in control." -- Tom Laundry


5 reasons the enterprise data center will never die

Cloud repatriation — enterprises pulling applications back from the cloud to the data center — remains a popular option for a variety of reasons. According to a June 2024 IDC survey, about 80% of 2,250 IT decision-maker respondents “expected to see some level of repatriation of compute and storage resources in the next 12 months.” IDC adds that the six-month period between September 2023 and March 2024 saw increased levels of repatriation plans “across both compute and storage resources for AI lifecycle, business apps, infrastructure, and database workloads.” ... According to Forrester’s 2023 Infrastructure Cloud Survey, 79% of roughly 1,300 enterprise cloud decision-makers said their firms are implementing internal private clouds, which will use virtualization and private cloud management. Nearly a third (31%) of respondents said they are building internal private clouds using hybrid cloud management solutions such as software-defined storage and API-consistent hardware to make the private cloud more like the public cloud, Forrester adds. ... “Edge is a crucial technology infrastructure that extends and innovates on the capabilities found in core datacenters, whether enterprise- or service-provider-oriented,” says IDC. The rise of edge computing shatters the binary “cloud-or-not-cloud” way of thinking about data centers and ushers in an “everything everywhere all at once” distributed model.


How to Understand and Manage Cloud Costs with a Data-Driven Strategy

Understanding your cloud spend starts with getting serious about data. If your cloud usage grew organically across teams over time, you're probably staring at a bill that feels more like a puzzle than a clear financial picture. You know you're paying too much, and you have an idea of where the spending is happening across compute, storage, and networking, but you are not sure which teams are overspending, which applications are being overprovisioned, and so on. Multicloud environments add even another layer of complexity to data visibility. ... With a holistic view of your data established, the next step is augmenting tools to gain a deeper understanding of your spending and application performance. To achieve this, consider employing a surgical approach by implementing specialized cost management and performance monitoring tools that target specific areas of your IT infrastructure. For example, granular financial analytics can help you identify and eliminate unnecessary expenses with precision. Real-time visibility tools provide immediate insights into cost anomalies and performance issues, allowing for prompt corrective actions. Governance features ensure that spending aligns with budgetary constraints and compliance requirements, while integration capabilities with existing systems facilitate seamless data consolidation and analysis across different platforms. 
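
As an illustration of the granular financial analytics step, here is a hedged sketch that flags days where a team's spend spikes against its own recent baseline. It assumes a tagged billing export with hypothetical columns date, team, and cost_usd:

```python
import pandas as pd

# Simple cost-anomaly check over tagged billing exports.
# Assumes a CSV with hypothetical columns: date, team, cost_usd.
df = pd.read_csv("billing_export.csv", parse_dates=["date"])
daily = df.groupby(["team", "date"])["cost_usd"].sum().reset_index()

def flag_anomalies(group, window=14, threshold=3.0):
    # Compare each day's spend to the preceding window's mean/std.
    baseline = group["cost_usd"].shift(1).rolling(window, min_periods=7)
    z = (group["cost_usd"] - baseline.mean()) / baseline.std()
    return group.assign(zscore=z)[z > threshold]

anomalies = (
    daily.sort_values("date")
         .groupby("team", group_keys=False)
         .apply(flag_anomalies)
)
print(anomalies)  # days where a team's spend spiked vs. its own baseline
```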


Top cybersecurity priorities for CFOs

CFOs need to be aware of the rising threats of cyber extortion, says Charles Soranno, a managing director at global consulting firm Protiviti. “Cyber extortion is a form of cybercrime where attackers compromise an organization’s systems, data or networks and demand a ransom to return to normal and prevent further damage,” he says. Beyond a ransomware attack, where data is encrypted and held hostage until the ransom is paid, cyber extortion can involve other evolving threats and tactics, Soranno says. “CFOs are increasingly concerned about how these cyber extortion schemes impact lost revenue, regulatory fines [and] potential payments to bad actors,” he says. ... “In collaboration with other organizational leaders, CFOs must assess the risks posed by these external partners to identify vulnerabilities and implement a proactive mitigation and response plan to safeguard from potential threats and issues.” While a deep knowledge of the entire supply chain’s cybersecurity posture might seem like a luxury for some organizations, the increasing interconnectedness of partner relationships is making third-party cybersecurity risk profiles more of a necessity, Krull says. “The reliance on third-party vendors and cloud services has grown exponentially, increasing the potential for supply chain attacks,” says Dan Lohrmann, field CISO at digital services provider Presidio. 


GDPR authorities accused of ‘inactivity’

The idea that the GDPR has brought about a shift towards a serious approach to data protection has largely proven to be wishful thinking, according to a statement from noyb. “European data protection authorities have all the necessary means to adequately sanction GDPR violations and issue fines that would prevent similar violations in the future,” Schrems says. “Instead, they frequently drag out the negotiations for years — only to decide against the complainant’s interests all too often.” ... “Somehow it’s only data protection authorities that can’t be motivated to actually enforce the law they’re entrusted with,” criticizes Schrems. “In every other area, breaches of the law regularly result in monetary fines and sanctions.” Data protection authorities often act in the interests of companies rather than the data subjects, the activist suspects. It is precisely fines that motivate companies to comply with the law, reports the association, citing its own survey. Two-thirds of respondents stated that decisions by the data protection authority that affect their own company and involve a fine lead to greater compliance. Six out of ten respondents also admitted that even fines imposed on other organizations have an impact on their own company. 


The three tech tools that will take the heat off HR teams in 2025

As for the employee review process, a content services platform enables HR employees to customise processes, routing approvals to the right managers, department heads, and people ops. This means that employee review processes can be expedited thanks to customisable forms, with easier goal setting, identification of upskilling opportunities, and career progression. When paperwork and contracts are uniform, customisable, and easily located, employers are equipped to support their talent to progress as quickly as possible – nurturing more fulfilled employees who want to stick around. ... Naturally, a lot of HR work is form-heavy, with anything from employee onboarding and promotions to progress reviews and remote working requests requiring HR input. However, with a content services platform, HR professionals can route and approve forms quickly, speeding up the process with digital forms that allow employees to enter information quickly and accurately. Going one step further, HR leaders can leverage automated workflows to route forms to approvers as soon as an employee completes them – cutting out the HR intermediary. ... Armed with a single source of truth, HR professionals can take advantage of automated workflows, enabling efficient notifications and streamlining HR compliance processes.


AI Could Turn Against You — Unless You Fix Your Data Trust Issues

Without unified standards for data formats, definitions, and validations, organizations struggle to establish centralized control. Legacy systems, often ill-equipped to handle modern data volumes, further exacerbate the problem. These systems were designed for periodic updates rather than the continuous, real-time streams demanded by AI, leading to inefficiencies and scalability limitations. To address these challenges, organizations must implement centralized governance, quality, and observability within a single framework. This enables them to leverage data lineage and track their data as it moves through systems to ensure transparency and identify issues in real-time. It also ensures they can regularly validate data integrity to support consistent, reliable AI models by conducting real-time quality checks. ... For organizations to maximize the potential of AI, they must embed data trust into their daily operations. This involves using automated systems like data observability to validate data integrity throughout its lifecycle, integrated governance to maintain reliability, and assuring continuous validation within evolving data ecosystems. By addressing data quality challenges and investing in unified platforms, organizations can transform data trust into a strategic advantage. 
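
A minimal sketch of what such real-time quality checks can look like at the point of ingestion, with hypothetical column names and thresholds:

```python
from datetime import datetime, timedelta, timezone
import pandas as pd

# Minimal in-pipeline quality gates; column names and limits are hypothetical.
def validate(batch: pd.DataFrame) -> list[str]:
    failures = []
    # Completeness: required fields must not be null.
    if batch["customer_id"].isna().any():
        failures.append("null customer_id values")
    # Validity: amounts must fall within an expected range.
    if not batch["amount"].between(0, 1_000_000).all():
        failures.append("amount out of expected range")
    # Freshness: the newest record should be recent enough for real-time use.
    age = datetime.now(timezone.utc) - batch["event_time"].max()
    if age > timedelta(minutes=15):
        failures.append(f"stale batch: newest record is {age} old")
    return failures

# In the pipeline: quarantine the batch rather than feed bad data to models.
# if (problems := validate(batch)): send_to_quarantine(batch, problems)
```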


Backdoor in Chinese-made healthcare monitoring device leaks patient data

“By reviewing the firmware code, the team determined that the functionality is very unlikely to be an alternative update mechanism, exhibiting highly unusual characteristics that do not support the implementation of a traditional update feature,” CISA said in its analysis report. “For example, the function provides neither an integrity checking mechanism nor version tracking of updates. When the function is executed, files on the device are forcibly overwritten, preventing the end customer — such as a hospital — from maintaining awareness of what software is running on the device.” In addition to this hidden remote code execution behavior, CISA also found that once the CMS8000 completes its startup routine, it also connects to that same IP address over port 515, which is normally associated with the Line Printer Daemon (LPD), and starts transmitting patient information without the device owner’s knowledge. “The research team created a simulated network, created a fake patient profile, and connected a blood pressure cuff, SpO2 monitor, and ECG monitor peripherals to the patient monitor,” the agency said. “Upon startup, the patient monitor successfully connected to the simulated IP address and immediately began streaming patient data to the address.”
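
For defenders, one hedged illustration of catching this class of behavior from the network side: alert whenever a device on the monitor segment opens a connection to port 515. The subnet is a placeholder, and the script assumes scapy plus packet-capture privileges:

```python
# Network-side check for the behavior CISA describes: a device opening
# connections to an outside address on TCP 515 (normally the LPD printer
# port). Requires scapy and capture privileges; the subnet is a placeholder.
from scapy.all import sniff, IP

DEVICE_SUBNET = "10.20.30."  # hypothetical segment where patient monitors live

def alert(pkt):
    if IP in pkt and pkt[IP].src.startswith(DEVICE_SUBNET):
        print(f"ALERT: {pkt[IP].src} -> {pkt[IP].dst}:515 (unexpected LPD egress)")

sniff(filter="tcp dst port 515", prn=alert, store=False)
```

Detection like this only confirms the behavior; the durable mitigation is segmenting the devices and blocking the hard-coded destination at the network boundary.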


3 Considerations for Mutual TLS (mTLS) in Cloud Security

Traditional security approaches often rely on IP whitelisting as a primary method of access control. While this technique can provide a basic level of security, IP whitelists operate on a fundamentally flawed assumption: that IP addresses alone can accurately represent trusted entities. In reality, this approach fails to effectively model real-world attack scenarios. IP whitelisting provides no mechanism for verifying the integrity or authenticity of the connecting service. It merely grants access based on network location, ignoring crucial aspects of identity and behavior. In contrast, mTLS addresses these shortcomings by focusing on cryptographic identity rather than network location. ... In the realm of mTLS, identity is paramount. It's not just about encrypting data in transit; it's about ensuring that both parties in a communication are exactly who they claim to be. This concept of identity in mTLS warrants careful consideration. In a traditional network, identity might be tied to an IP address or a shared secret. But, in the modern world of cloud-native applications, these concepts fall short. mTLS shifts the mindset by basing identity on cryptographic certificates. Each service possesses its own unique certificate, which serves as its identity card.
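
A minimal client-side sketch using Python's standard library shows both halves of the mutual handshake: verifying the server against a private CA and presenting the client's own certificate. Hostnames and file paths are hypothetical:

```python
import socket
import ssl

# mTLS client sketch: verify the server against a private CA and present the
# client's own certificate, so identity is cryptographic rather than IP-based.
context = ssl.create_default_context(ssl.Purpose.SERVER_AUTH, cafile="ca.pem")
context.load_cert_chain(certfile="service-a.pem", keyfile="service-a.key")

with socket.create_connection(("service-b.internal", 8443)) as sock:
    with context.wrap_socket(sock, server_hostname="service-b.internal") as tls:
        peer = tls.getpeercert()
        print("connected to:", peer["subject"])  # the server's verified identity
        tls.sendall(b"ping")
```

On the server side, the mirror image is a context built with ssl.Purpose.CLIENT_AUTH and verify_mode set to ssl.CERT_REQUIRED, which is what makes the TLS mutual.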


Artificial Intelligence Versus the Data Engineer

It’s worth noting that there is a misconception that AI can prepare data for AI, when the reality is that, while AI can accelerate the process, data engineers are still needed to get that data in shape before it reaches the AI processes and models and we see the cool end results. At the same time, there are AI tools that can certainly accelerate and scale the data engineering work. So AI is both causing and solving the challenge in some respects! So, how does AI change the role of the data engineer? Firstly, the role of the data engineer has always been tricky to define. We sit atop a large pile of technology, most of which we didn’t choose or build, and an even larger pile of data we didn’t create, and we have to make sense of the world. Ostensibly, we are trying to get to something scientific. ... That art comes in the form of the intuition required to sift through the data, understand the technology, and rediscover all the little real-world nuances and history that over time have turned some lovely clean data into a messy representation of the real world. The real skill great data engineers have is therefore not the SQL ability but how they apply it to the data in front of them to sniff out the anomalies, the quality issues, the missing bits and those historical mishaps that must be navigated to get to some semblance of accuracy.


How engineering teams can thrive in 2025

Adopting a "fail forward" mentality is crucial as teams experiment with AI and other emerging technologies. Engineering teams are embracing controlled experimentation and rapid iteration, learning from failures and building knowledge. ... Top engineering teams will combine emerging technologies with new ways of working. They’re not just adopting AI—they’re rethinking how software is developed and maintained as a result of it. Teams will need to stay agile to lead the way. Collaboration within the business and access to a multidisciplinary talent base is the recipe for success. Engineering teams should proactively scenario plan to manage uncertainty by adopting agile frameworks like the "5Ws" (Who, What, When, Where, and Why). This approach allows organizations to tailor tech adoption strategies and marry regulatory compliance with innovation. Engineering teams should also actively address AI bias and ensure fair and responsible AI deployment. Many enterprises are hiring responsible AI specialists and ethicists as regulatory standards are now in force, including the EU AI Act, which impacts organizations with users in the European Union. As AI improves, the expertise and technical skills that proved valuable before need to be continually reevaluated. Organizations that successfully adopt AI and emerging tech will thrive.


Daily Tech Digest - January 16, 2025

How DPUs Make Collaboration Between AppDev and NetOps Essential

While GPUs have gotten much of the limelight due to AI, DPUs in the cloud are having an equally profound impact on how applications are delivered and network functions are designed. The rise of DPU-as-a-Service is breaking down traditional silos between AppDev and NetOps teams, making collaboration essential to fully unlock DPU capabilities. DPUs offload network, security, and data processing tasks, transforming how applications interact with network infrastructure. AppDev teams must now design applications with these offloading capabilities in mind, identifying which tasks can benefit most from DPUs—such as real-time data encryption or intensive packet processing. ... AppDev teams must explicitly design applications to leverage DPU-accelerated encryption, while NetOps teams need to configure DPUs to handle these workloads efficiently. This intersection of concerns creates a natural collaboration point. The benefits of this collaboration extend beyond security. DPUs excel at packet processing, data compression, and storage operations. When AppDev and NetOps teams work together, they can identify opportunities to offload compute-intensive tasks to DPUs, dramatically improving application performance. 


The CFO may be the CISO’s most important business ally

“Cybersecurity is an existential threat to every company. Gone are the days where CFOs could only be fired if they ran out of money, cooked the books, or had a major controls outage,” he said. “Lack of adequate resourcing of cybersecurity is an emerging threat to their very existence.” This sentiment reflects the reality that for most organizations cyber threat is the No. 1 business risk today, and this has significant implications for the strategic survival of the enterprise. It’s time for CISOs and CFOs to address the natural barriers to their relationship and develop a strategic partnership for the good of the company. ... CISOs should be aware of a few key strategies for improving collaboration with their CFO counterparts. The first is reverse mentoring. Because CFOs and CISOs come from differing perspectives and lead domains rife with terminology and details that can be quite foreign to the other, reverse mentoring can be important for building a bridge between the two. In such a relationship, the CISO can offer insights into cybersecurity, while simultaneously learning to communicate in the CFO’s financial language. This mutual learning creates a more aligned approach to organizational risk. Second, CISOs must also develop their commercial perspective.


Establishing a Software-Based, High-Availability Failover Strategy for Disaster Mitigation and Recovery

No one should be surprised that cloud services occasionally go offline. If you think of the cloud as “someone else’s computer,” then you recognize there are servers and software behind it all. Someone else is doing their best to keep the lights on in the face of events like human error, natural disasters, and DDoS and other types of cyberattacks. Someone else is executing their disaster response and recovery plan. While the cloud may well be someone else’s computer, when there is a cloud outage that affects your operations, it is your problem. You are at the mercy of someone else to restore services so you can get back online. It doesn’t have to be that way. Cloud-dependent organizations can adopt strategies that allow them to minimize the risk someone else’s outage will knock them offline. One such strategy is to take advantage of hybrid or multi-cloud architecture to achieve operational resiliency and high availability through service redundancy via SANless clustering. Normally a storage area network (SAN) uses local storage to configure clustered nodes on-premises, in the cloud, and to a disaster recovery site. It’s a proven approach, but because it is hardware dependent, it is costly in terms of dollars and computing resources, and comes with additional management demands.
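
Clustering products implement this with quorum, fencing, and replication, but the core failover loop can be sketched in a few lines. Endpoints, thresholds, and the promotion hooks below are hypothetical:

```python
import time
import urllib.request

# Toy sketch of failover watchdog logic only -- real clustering software also
# handles quorum, fencing, and data replication. Endpoints are hypothetical.
PRIMARY_HEALTH = "http://primary.internal:8080/health"
FAILURES_BEFORE_FAILOVER = 3

def healthy(url, timeout=2):
    try:
        with urllib.request.urlopen(url, timeout=timeout) as resp:
            return resp.status == 200
    except OSError:
        return False

failures = 0
while True:
    failures = 0 if healthy(PRIMARY_HEALTH) else failures + 1
    if failures >= FAILURES_BEFORE_FAILOVER:
        print("primary unhealthy; promoting standby and repointing traffic")
        # promote_standby(); update_dns_or_vip()  # cluster-specific actions
        break
    time.sleep(5)
```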


Trusted Apps Sneak a Bug Into the UEFI Boot Process

UEFI is a kind of sacred space — a bridge between firmware and operating system, allowing a machine to boot up in the first place. Any malware that invades this space will earn a dogged persistence through reboots, by reserving its own spot in the startup process. Security programs have a harder time detecting malware at such a low level of the system. Even more importantly, by loading first, UEFI malware will simply have a head start over those security checks that it aims to avoid. Malware authors take advantage of this order of operations by designing UEFI bootkits that can hook into security protocols, and undermine critical security mechanisms like UEFI Secure Boot or HVCI, Windows' technology for blocking unsigned code in the kernel. To ensure that none of this can happen, the UEFI Boot Manager verifies every boot application binary against two lists: "db," which includes all signed and trusted programs, and "dbx," including all forbidden programs. But when a vulnerable binary is signed by Microsoft, the matter is moot. Microsoft maintains a list of requirements for signing UEFI binaries, but the process is a bit obscure, Smolár says. "I don't know if it involves only running through this list of requirements, or if there are some other activities involved, like manual binary reviews where they look for not necessarily malicious, but insecure behavior," he says.


How CISOs Can Build a Disaster Recovery Skillset

In a world of third-party risk, human error, and motivated threat actors, even the best prepared CISOs cannot always shield their enterprises from all cybersecurity incidents. When disaster strikes, how can they put their skills to work? “It is an opportunity for the CISO to step in and lead,” says Erwin. “That's the most critical thing a CISO is going to do in those incidents, and if the CISO isn't capable doing that or doesn't show up and shape the response, well, that's an indication of a problem.” CISOs, naturally, want to guide their enterprises through a cybersecurity incident. But disaster recovery skills also apply to their own careers. “I don't see a world where CISOs don't get some blame when an incident happens,” says Young. There is plenty of concern over personal liability in this role. CISOs must consider the possibility of being replaced in the wake of an incident and potentially being held personally responsible. “Do you have parachute packages like CEOs do in their corporate agreements for employability when they're hired?” Young asks. “I also see this big push of not only … CISOs on the D&O insurance, but they're also starting to acquire private liability insurance for themselves directly.”


Site Reliability Engineering Teams Face Rising Challenges

While AI adoption continues to grow, it hasn't reduced operational burdens as expected. Performance issues are now considered as critical as complete outages. Organizations are also grappling with balancing release velocity against reliability requirements. ... Daoudi suspects that there are a series of contributing factors that have led to the unexpected rise in toil levels. The first is AI systems maintenance: AI systems themselves require significant maintenance, including updating models and managing GPU clusters. AI systems also often need manual supervision due to subtle and hard-to-predict errors, which can increase the operational load. Additionally, the free time created by expediting valuable activities through AI may end up being filled with toilsome tasks, he said. "This trend could impact the future of SRE practices by necessitating a more nuanced approach to AI integration, focusing on balancing automation with the need for human oversight and continuous improvement," Daoudi said. Beyond AI, Daoudi also suspects that organizations are incorrectly evaluating toolchain investments. In his view, despite all the investments in inward-focused application performance management (APM) tools, there are still too many incidents, and the report shows a sentiment for insufficient observability instrumentation.


The Hidden Cost of Open Source Waste

Open source inefficiencies impact organizations in ways that go well beyond technical concerns. First, they drain productivity. Developers spend as much as 35% of their time untangling dependency issues or managing vulnerabilities — time that could be far better spent building new products, paying down technical debt, or introducing automation to drive cost efficiencies. ... Outdated dependencies compound the challenge. According to the report, 80% of application dependencies remain un-upgraded for over a year. While not all of these components introduce critical vulnerabilities, failing to address them increases the risk of undetected security gaps and adds unnecessary complexity to the software supply chain. This lack of timely updates leaves development teams with mounting technical debt and a higher likelihood of encountering issues that could have been avoided. The rapid pace of software evolution adds another layer of difficulty. Dependencies can become outdated in weeks, creating a moving target that’s hard to manage without automation and actionable insights. Teams often play catch-up, deepening inefficiencies and increasing the time spent on reactive maintenance. Automation helps bridge this gap by scanning for risks and prioritizing high-impact fixes, ensuring teams focus on the areas that matter most.
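
As a sketch of the kind of automation the report points toward, the snippet below flags pinned Python dependencies whose pinned release is more than a year old, using PyPI's public JSON API; the pins themselves are illustrative:

```python
import json
import urllib.request
from datetime import datetime, timedelta, timezone

# Flag pinned dependencies whose pinned release is over a year old,
# via PyPI's public JSON API. The pins below are illustrative.
PINNED = {"requests": "2.25.1", "flask": "1.1.2"}

def release_date(pkg, version):
    url = f"https://pypi.org/pypi/{pkg}/{version}/json"
    with urllib.request.urlopen(url) as resp:
        uploads = json.load(resp)["urls"]
    stamp = uploads[0]["upload_time_iso_8601"].replace("Z", "+00:00")
    return datetime.fromisoformat(stamp)

for pkg, version in PINNED.items():
    age = datetime.now(timezone.utc) - release_date(pkg, version)
    if age > timedelta(days=365):
        print(f"{pkg}=={version} released {age.days} days ago -- review upgrade")
```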


The Virtualization Era: Opportunities, Challenges, and the Role of Hypervisors

Choosing the most appropriate hypervisor requires thoughtful consideration of an organization’s immediate needs and long-term goals. Scalability is a crucial factor, as the selected solution must address current workloads and seamlessly adapt to future demands. A hypervisor that integrates smoothly with an organization’s existing IT infrastructure reduces the risks of operational disruptions and ensures a cost-effective transition. Equally important is the financial aspect, where businesses must look beyond the initial licensing fees to account for potential hidden costs, such as staff training, ongoing support, and any necessary adjustments to workflows. The quality of support the vendor provides, coupled with the strength of the user community, can significantly influence the overall experience, offering critical assistance during implementation and beyond. For many businesses, partnering with Managed Service Providers (MSPs) brings an added layer of expertise, ensuring that the chosen solution delivers maximum value while minimizing risk. The ongoing evolution and transformation of the virtualization market presents both challenges and opportunities. As the foundation for IT efficiency and flexibility, hypervisors remain central to these changes.


DORA’s Deadline Looms: Navigating the EU’s Mandate for Threat Led Penetration Testing

It’s hard to defend yourself if you have no idea what you’re up against, and history and countless news stories are evidence that trying to defend against every manner of digital threat is a fool’s errand. As such, the first step to approaching DORA compliance is profiling not only the threat actors that target the financial services sector, but specifically which actors, and by what Tactics, Techniques, and Procedures (TTPs), you are likely to be attacked. However, before you can determine how an actor may view and approach you, you need to know who you are. So, the first profile that must be built is of your own business. Not just financial services, but what sector/aspect, what region, and finally what specific risk profile follows from the critical assets in organizational, and even partner, infrastructures. The second profile begins with the current population of known actors that target the financial services industry. It then narrows to the actors known to be aligned with the specific targeting profile. From there, leveraging industry standard models such as the MITRE ATT&CK framework, a graph is created of each actor/group’s understood goals and TTPs, including their traditional and preferred methods of access and exploitation, as well as their capabilities for evasion, persistence and command and control.
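
The narrowing step lends itself to simple tooling. Below is an illustrative sketch that ranks actors by the overlap between their known ATT&CK technique sets and the organization's own exposure profile; the actors, technique IDs, and exposure set are example data, not a curated intelligence feed:

```python
# Rank known financial-sector actors by overlap between their ATT&CK
# techniques and the techniques your own profile says you are exposed to.
# All data here is illustrative example data.
ACTOR_TTPS = {
    "FIN7":  {"T1566", "T1059", "T1071", "T1486"},
    "APT38": {"T1566", "T1021", "T1070", "T1485"},
    "TA505": {"T1566", "T1204", "T1059"},
}

# Built from your own business/asset profile (sector, region, tech stack).
ORG_EXPOSURE = {"T1566", "T1059", "T1021"}

ranked = sorted(
    ((actor, len(ttps & ORG_EXPOSURE) / len(ttps)) for actor, ttps in ACTOR_TTPS.items()),
    key=lambda pair: pair[1],
    reverse=True,
)
for actor, overlap in ranked:
    print(f"{actor}: {overlap:.0%} of known TTPs intersect our exposure profile")
```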


With AGI looming, CIOs stay the course on AI partnerships

“The immediate path for CIOs is to leverage gen AI for augmentation rather than replacement — creating tools that help human teams make smarter, faster decisions,” Nardecchia says. “There are very promising results with causal AI and AI agents that give an autonomous-like capability and most solutions still have a human in the loop.” Matthew Gunkel, CIO of IT Solutions at the University of California at Riverside, agrees that IT organizations should keep moving forward regardless of the growing delta between AI technology milestones and actual AI implementations. ... “The rapid advancements in AI technology, including projections for AGI and ACI, present a paradox: While the technology races ahead, enterprise adoption remains in its infancy. This divergence creates both challenges and opportunities for CIOs, employees, and AI vendors,” Priest says. “Rather than speculating on when AGI/ACI will materialize, CIOs would be best served to focus on what preparation is required to be ready for it and to maximize the value from it.” Sid Nag, vice president at Gartner, agrees that CIOs should train their attention on laying the foundation for AI and addressing important matters such as privacy, ethics, legal issues, and copyright issues, rather than focus on AGI advances.



Quote for the day:

"When you practice leadership,The evidence of quality of your leadership, is known from the type of leaders that emerge out of your leadership" -- Sujit Lalwani

Daily Tech Digest - September 21, 2024

Quantinuum Scientists Successfully Teleport Logical Qubit With Fault Tolerance And Fidelity

This research advances quantum computing by making teleportation a reliable tool for quantum systems. Teleportation is essential in quantum algorithms and network designs, particularly in systems where moving qubits physically is difficult or impossible. By implementing teleportation in a fault-tolerant manner, Quantinuum’s research brings the field closer to practical, large-scale quantum computing systems. The fidelity of the teleportation also suggests that future quantum networks could reliably transmit quantum states over long distances, enabling new forms of secure communication and distributed quantum computing. The use of QEC in these experiments is especially promising, as error correction is one of the key challenges in making quantum computing scalable. Without fault tolerance, quantum states are prone to errors caused by environmental noise, making complex computations unreliable. The fact that Quantinuum achieved high fidelity using real-time QEC demonstrates the increasing maturity of its hardware and software systems.
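
For readers who want the conceptual baseline, the sketch below implements textbook single-qubit teleportation in Qiskit, using the deferred-measurement form in which quantum-controlled corrections stand in for the classically conditioned gates. This is the standard non-fault-tolerant protocol only; it does not reproduce Quantinuum's QEC-protected logical-qubit scheme.

```python
# Textbook quantum teleportation of one qubit, written with Qiskit.
# Standard non-fault-tolerant protocol for illustration only; it does not
# reproduce Quantinuum's logical-qubit, QEC-protected implementation.
from qiskit import QuantumCircuit

qc = QuantumCircuit(3)          # q0: state to send; q1/q2: Bell pair

qc.ry(0.7, 0)                   # prepare an arbitrary state on q0

qc.h(1)                         # entangle q1 and q2 into a Bell pair
qc.cx(1, 2)

qc.cx(0, 1)                     # Bell-basis interaction on the sender's side
qc.h(0)

# Deferred-measurement form: quantum-controlled corrections replace the
# classically conditioned X/Z gates of the usual protocol.
qc.cx(1, 2)
qc.cz(0, 2)

print(qc.draw())                # q2 now holds the state prepared on q0
```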


Adversarial attacks on AI models are rising: what should you do now?

Adversarial attacks on ML models look to exploit gaps by intentionally redirecting the model with crafted inputs, corrupted data, jailbreak prompts, and malicious commands hidden in images loaded back into a model for analysis. Attackers fine-tune adversarial attacks to make models deliver false predictions and classifications, producing the wrong output. ... Disrupting entire networks with adversarial ML attacks is the stealth attack strategy nation-states are betting on to disrupt their adversaries’ infrastructure, with cascading effects across supply chains. The 2024 Annual Threat Assessment of the U.S. Intelligence Community provides a sobering look at how important it is to protect networks from adversarial ML model attacks and why businesses need to consider better securing their private networks against them. ... Machine learning models that have not undergone adversarial training are readily manipulated. Adversarial training exposes a model to adversarial examples and significantly strengthens its defenses. Researchers say adversarial training improves robustness but requires longer training times and may trade accuracy for resilience.
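
To make the attack class concrete, here is a minimal sketch of the fast gradient sign method (FGSM), one of the best-known input-perturbation attacks, in PyTorch. The model, input, and epsilon are stand-ins chosen for brevity.

```python
# Minimal FGSM (fast gradient sign method) sketch in PyTorch: a classic
# adversarial-input attack of the kind described above. The model here is a
# stand-in; epsilon controls how far the input is perturbed.
import torch
import torch.nn as nn

model = nn.Sequential(nn.Flatten(), nn.Linear(28 * 28, 10))  # placeholder model
loss_fn = nn.CrossEntropyLoss()

x = torch.rand(1, 1, 28, 28, requires_grad=True)  # stand-in input image
y = torch.tensor([3])                             # its true label

loss = loss_fn(model(x), y)
loss.backward()                                   # gradient of loss w.r.t. x

epsilon = 0.05                                    # perturbation budget
x_adv = (x + epsilon * x.grad.sign()).clamp(0, 1) # nudge input to raise loss

print("prediction before:", model(x).argmax().item())
print("prediction after: ", model(x_adv).argmax().item())
```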


4 ways to become a more effective business leader

Delivering quantitative results isn't the only factor that defines effective leaders -- great managers also possess the right qualitative skills, including the ability to communicate and collaborate with their peers. "Once you reach that higher level in the business, particularly if you are part of the executive committee, you need to know how to deal with corporate politics," said Vogel. Managers must recognize that underlying corporate politics are often driven by social motivations. Great leaders see the signs. "If you're unable to read the room and understand and navigate that context, it's going to be tough," said Vogel. ... The rapid pace of change in modern organizations represents a huge challenge for all business leaders. Vogel instructed would-be executives to keep learning. "Especially at the moment, and the world we work in, you need to upskill yourself," she said. "We have had so much change happening in the business." Vogel said technology is a key factor in the rapid pace of change. The past two years have seen huge demand for gen AI and machine learning. In the future, technological innovations around blockchain, quantum computing, and robotics will lead to more pressure for digital transformation.


Cloud architects: Try thinking like a CFO

Cloud architects must cut through the hype and focus on real-world applications and benefits. More than mere technological enhancement is required; architects must make a clear financial case. This is particularly apt in environments where executive decision-makers demand justification for every technology dollar spent. Aligning cloud architecture strategies with business outcomes requires architects to step beyond traditional roles and strategically engage with critical financial metrics. For example, reducing operational expenses through efficient cloud resource management will directly impact a company’s bottom line. A successful cloud architect will provide CFOs with predictive analytics and cost-saving projections, demonstrating clear business value and market advantage. Moreover, the increasing pressure on businesses to operate sustainably allows architects to leverage the cloud’s potential for greener operations. These are often strategic wins that CFOs can directly appreciate in terms of corporate financial and social governance metrics. However, when I bring up the topic of sustainability, I receive a lot of nods, but few people seem to care. 
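
One way to make that financial case tangible is to translate an efficiency initiative into the metrics a CFO already uses. The sketch below turns an assumed rightsizing saving into a net present value; the monthly spend, savings rate, and discount rate are all hypothetical inputs.

```python
# Sketch of turning a rightsizing initiative into CFO-facing numbers.
# Monthly spend, savings rate, and discount rate are invented examples.

monthly_spend = 120_000      # current monthly cloud bill (hypothetical)
savings_rate = 0.22          # expected reduction from rightsizing (assumed)
annual_discount_rate = 0.08  # CFO's discount rate for NPV (assumed)

monthly_saving = monthly_spend * savings_rate
r = annual_discount_rate / 12

# Net present value of 36 months of recurring savings
npv = sum(monthly_saving / (1 + r) ** m for m in range(1, 37))

print(f"Monthly saving: ${monthly_saving:,.0f}")
print(f"3-year NPV of savings: ${npv:,.0f}")
```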


Wherever There's Ransomware, There's Service Account Compromise. Are You Protected?

Most service accounts are created to access other machines. That inevitably implies that they have the access privileges required to log in and execute code on those machines. This is exactly what threat actors are after, as compromising these accounts grants them the ability to access target machines and execute their malicious payload. ... Some service accounts, especially those associated with installed on-prem software, are known to the IT and IAM staff. However, many are created ad hoc by IT and identity personnel with no documentation. This makes maintaining a monitored inventory of service accounts close to impossible, which plays into attackers' hands: compromising and abusing an unmonitored account has a far greater chance of going undetected by the attack's victim. ... The common security measures used to prevent account compromise are MFA and PAM. MFA can't be applied to service accounts because they are not human and don't own a phone, hardware token, or any other additional factor that could verify their identity beyond a username and password. PAM solutions also struggle to protect service accounts.
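
A practical first step toward such an inventory is to enumerate accounts that carry the markers of service use. The sketch below queries Active Directory over LDAP for accounts with a servicePrincipalName set, one common (though incomplete) signal. The server address, base DN, and credentials are placeholders.

```python
# One possible starting point for a service-account inventory: query Active
# Directory for accounts that carry a servicePrincipalName, a common marker
# of service accounts. Server, base DN, and credentials are placeholders.
from ldap3 import Server, Connection, SUBTREE

server = Server("ldaps://dc.example.local")               # hypothetical DC
conn = Connection(server, user="EXAMPLE\\auditor",
                  password="change-me", auto_bind=True)

conn.search(
    search_base="DC=example,DC=local",
    search_filter="(&(objectClass=user)(servicePrincipalName=*))",
    search_scope=SUBTREE,
    attributes=["sAMAccountName", "servicePrincipalName", "lastLogonTimestamp"],
)

for entry in conn.entries:
    print(entry.sAMAccountName, entry.servicePrincipalName)
```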


Datacenters bleed watts and cash – all because they're afraid to flip a switch

The good news is CPU vendors have developed all manner of techniques for managing power and performance over the years. Many of these are rooted in mobile applications, where energy consumption is a far more important metric than in the datacenter. According to Uptime, these controls can have a major impact on system power consumption and don't necessarily have to kneecap the chip by limiting its peak performance. The most power efficient of these regimes, according to Uptime, are software-based controls, which have the potential to cut system power consumption by anywhere from 25 to 50 percent – depending on how sophisticated the operating system governor and power plan are. However, these software-level controls also have the potential to impart the biggest latency hit. This potentially makes these controls impractical for bursty or latency-sensitive jobs. By comparison, Uptime found that hardware-only implementations designed to set performance targets tend to be far faster when switching between states – which means a lower latency hit. The trade-off is the power savings aren't nearly as impressive, topping out around ten percent.
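
On Linux, the software-based controls Uptime describes surface as cpufreq governors, which can be switched at runtime through sysfs. A minimal sketch follows; it assumes root privileges and a cpufreq-capable driver, and the governor names shown are the common upstream ones.

```python
# Sketch of the software-level power controls discussed above: on Linux these
# surface as cpufreq "governors" in sysfs. Requires root and a cpufreq-capable
# driver; the paths below are the standard sysfs locations.
import glob

def set_governor(governor: str) -> None:
    """Write the chosen governor for every CPU core."""
    for path in glob.glob("/sys/devices/system/cpu/cpu*/cpufreq/scaling_governor"):
        with open(path, "w") as f:
            f.write(governor)

def current_governors() -> set[str]:
    """Report which governors are active across cores."""
    return {
        open(p).read().strip()
        for p in glob.glob("/sys/devices/system/cpu/cpu*/cpufreq/scaling_governor")
    }

# "powersave" and "schedutil" trade peak performance for lower draw;
# "performance" pins the CPU at high frequency.
set_governor("schedutil")
print(current_governors())
```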


An AI-Driven Approach to Risk-Scoring Systems in Cybersecurity

The integration of AI into risk-scoring systems also enhances the overall security strategy of an organization. These systems are not static but rather learn and adapt over time, becoming increasingly effective as they encounter new threat patterns and scenarios. This adaptive capability is crucial in the face of rapidly evolving cyber threats, allowing organizations to stay one step ahead of potential attackers. An example of this in action is detecting anomalies during user sign-on by analyzing physical attributes and comparing them to typical behavior patterns. ... It's important, however, to realize that AI is not a cure-all for every cybersecurity challenge. The most impactful strategies combine the analytical power of AI with human expertise. While AI excels at processing vast amounts of data and identifying patterns, human analysts provide critical contextual understanding and decision-making capabilities. It's crucial for AI systems to continuously learn from the input of subject matter experts (SMEs) through a feedback loop to refine their accuracy and minimize alert fatigue; this collaboration between human and artificial intelligence creates a robust defense against a wide range of cyber threats.
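
As a small illustration of the sign-on example, the sketch below scores login events with an Isolation Forest, a common unsupervised anomaly detector. The features and data are invented; a production system would train on per-user historical telemetry.

```python
# Sketch of anomaly scoring for sign-on events with an Isolation Forest.
# The features (hour of day, failed attempts, new-device flag) and data are
# invented; a real system would learn from per-user historical telemetry.
import numpy as np
from sklearn.ensemble import IsolationForest

# columns: sign-on hour, failed attempts before success, new device (0/1)
history = np.array([[9, 0, 0], [10, 1, 0], [8, 0, 0], [17, 0, 0], [9, 0, 1]])

model = IsolationForest(contamination=0.1, random_state=0).fit(history)

event = np.array([[3, 6, 1]])   # 3 a.m., six failures, unrecognized device
score = model.decision_function(event)[0]  # lower = more anomalous
print("risk flag" if model.predict(event)[0] == -1 else "normal", score)
```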


API Security in Financial Services: Navigating Regulatory and Operational Challenges

API breaches can have devastating consequences, including data loss, brand damage, financial losses, and customer attrition. For example, a breach that exposes customer account information can lead to financial theft and identity fraud. The reputational damage from such incidents can result in loss of customer trust and increased scrutiny from regulators. Institutions must recognize the potential fallout from breaches and take proactive steps to mitigate these risks, understanding that the cost of breaches often far exceeds the investment in robust security measures. ... Common security controls such as encryption, data loss prevention, and web application firewalls are widely used, yet their effectiveness remains limited. The report indicates that 45% of financial institutions can only prevent half or fewer API attacks, underscoring the need for improved security strategies and tools. Encryption, while essential, only protects data at rest and in transit, leaving APIs vulnerable to other types of attacks like injection and denial-of-service. Further, data loss prevention systems often struggle to keep pace with the volume and complexity of API traffic.
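
Beyond encryption, validating request payloads at the API boundary is one widely used control against injection-style attacks. Below is a sketch using pydantic; the endpoint's field names and constraints are hypothetical.

```python
# Schema validation at the API boundary is one complement to encryption for
# the injection-style attacks noted above. A pydantic sketch; the field names
# and constraints are hypothetical examples.
from pydantic import BaseModel, Field, ValidationError

class TransferRequest(BaseModel):
    account_id: str = Field(pattern=r"^[A-Z0-9]{10}$")  # reject injection payloads
    amount_cents: int = Field(gt=0, lt=10_000_000)      # bound the value range

try:
    TransferRequest(account_id="ABC123'; DROP TABLE--", amount_cents=5000)
except ValidationError as e:
    print("rejected:", e.errors()[0]["type"])
```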


Guide To Navigating the Legal Perils After a Cyber Incident

Cyber incidents pose significant technical challenges, but the real storm often hits after the breach gets contained, Nall said. That’s when regulators step in to scrutinize every decision made in the heat of the crisis. While scrutiny has traditionally focused on corporate leadership or legal departments, today, infosec workers risk facing charges of fraud, negligence, or worse, simply for doing their jobs. ... Instead of clear, universal cybersecurity standards, regulatory bodies like the SEC only define acceptable practices after a breach occurs, Nall said. This reactive approach puts CISOs and other infosec workers at a distinct disadvantage. "Federal prosecutors and SEC attorneys read the paper like anyone else, and when they see bad things happening, like major breaches, especially where there is a delay in disclosure, they have to go after those companies," Nall explained during her presentation. ... Fortunately, CISOs and other infosec workers can take several concrete steps to protect their careers and reputations. By implementing airtight communication practices and negotiating solid legal protections, they can navigate the fallout of a disastrous cyber incident. 


As the AI Bubble Deflates, the Ethics of Hype Are in the Spotlight

One of the major problems we’re seeing right now in the AI industry is the overpromising of what AI tools can actually do. There’s a huge amount of excitement around AI’s observational capacities, or the notion that AI can see things that are otherwise unobservable to the human eye due to these tools’ ability to discern trends from huge amounts of data. However, these observational capacities are not only overstated, but also often completely misleading. They lead to AI being attributed almost magical powers, whereas in reality a large number of AI products grossly underperform relative to what they are promised to do. ... So, the true believers caught up in the promises and excitement are likely to be disappointed. But throughout the hype cycle, many notable figures, including practitioners and researchers, have challenged narratives about the unconstrained transformational potential of AI. Some have expressed alarm at the mechanisms, techniques, and behavior that allowed such unbridled fervour to override the healthy caution necessary ahead of the deployment of any emerging technology, especially one with the potential for such massive societal and environmental upheaval.



Quote for the day:

“Start each day with a positive thought and a grateful heart.” -- Roy T. Bennett