Daily Tech Digest - June 30, 2024

The Unseen Ethical Considerations in AI Practices: A Guide for the CEO

AI’s “black box” problem is well-known, but the ethical imperative for transparency goes beyond making algorithms understandable and their results explainable. It’s about ensuring that stakeholders can comprehend AI decisions, processes, and implications, and that these align with human values and expectations. Recent techniques, such as reinforcement learning from human feedback (RLHF), which aligns AI outcomes with human values and preferences, help ensure that AI-based systems behave ethically. This means developing AI systems in which decisions are in accordance with human ethical considerations and can be explained in terms that are comprehensible to all stakeholders, not just the technically proficient. Explainability empowers individuals to challenge or correct erroneous outcomes and promotes fairness and justice. Together, transparency and explainability uphold ethical standards, enabling responsible AI deployment that respects privacy and prioritizes societal well-being. This approach promotes trust, and trust is the bedrock upon which sustainable AI ecosystems are built.


Cyber resilience - how to achieve it when most businesses – and CISOs – don’t care

Organizations should ask themselves some serious, searching questions about why they are driven to keep doing the same thing over and over again – while spending millions of dollars in the process. As Bathurst put it: Why isn't security by design built in at the beginning of these projects, which are driving people to make the wrong decisions – decisions that nobody wants? Nobody wants to leave us open to attack. And nobody wants our national health infrastructure, ... But at this point, we should remind ourselves that, despite that valuable exercise, both the Ministry of Defence and the NHS have been hacked and/or subjected to ransomware attacks this year. In the first case, via a payroll system, which exposed personal data on thousands of staff, and in the second, via a private pathology lab. The latter incursion revealed patient blood-test data, leading to several NHS hospitals postponing operations and reverting to paper records. So, the lesson here is that, while security by design is essential for critical national infrastructure, resilience in the networked, cloud-enabled age must acknowledge that countless other systems, both upstream and downstream, feed into those critical ones.


Prominent Professor Discusses Digital Transformation, the Future of AI, Tesla, and More

“Customers are always going to have some challenges, and there are constant new technological trends evolving. Digital transformation is about intentionally moving towards making the experience more personalized by weaving new technology applications to solve customer challenges and deliver value,” shared Krishnan. However, as machine learning and GenAI help companies personalize their products and services, the tools themselves are also becoming more niche. “I think we’ll move to more domain and industry-specific generative AI and large language models. The healthcare industry will have an LLM, consumer packaged goods, education, etc,” shared Krishnan. “However, because companies will protect their own data, every large organization will create its own LLM with the private data. That’s why generative AI is interesting because it can actually get to be more personalized while also leveraging the broader knowledge. Eventually, we may all have our own individual GPTs.” ... Although new technologies such as GenAI and machine learning have had an immense impact in such a short time, Krishnan warns that guardrails are necessary, especially as our use of these tools becomes more essential.


Enhancing Your Company’s DevEx With CI/CD Strategies

Cognitive load is the amount of mental processing necessary for a developer to complete a task. Companies generally have one programming language that they use for everything; their entire toolchain and talent pool is geared toward it for maximum productivity. CI/CD tools, on the other hand, often have their own DSL, so when developers want to alter the CI/CD configuration, they must work in a new, rarely used language. This becomes a time sink and imposes a high cognitive load. One way to avoid giving developers needlessly high-cognitive-load tasks is to pick CI/CD tools that use a well-known language. For example, the data serialization language YAML — not always the most loved — is an industry standard that most developers already know how to use. ... In software engineering, feedback loops can be measured by how quickly questions are answered. Troubleshooting issues within a CI/CD pipeline can be challenging for developers because they lack visibility and information. These processes often operate as black boxes, running on servers that developers may not have direct access to, using software that is foreign to them.
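
To make the "well-known language" point concrete, the sketch below uses Python with the PyYAML library to load and sanity-check a hypothetical pipeline definition; the stage names and required keys are invented for illustration and do not come from any specific CI/CD product.

```python
# Minimal sketch: validating a hypothetical CI pipeline definition written in YAML.
# Requires PyYAML (pip install pyyaml). The schema below is invented for illustration.
import yaml

PIPELINE_YAML = """
stages:
  - name: build
    script: ["make build"]
  - name: test
    script: ["make test"]
"""

REQUIRED_STAGE_KEYS = {"name", "script"}

def validate_pipeline(text: str) -> list[str]:
    """Return a list of human-readable problems found in the pipeline config."""
    problems = []
    config = yaml.safe_load(text)
    for i, stage in enumerate(config.get("stages", [])):
        missing = REQUIRED_STAGE_KEYS - stage.keys()
        if missing:
            problems.append(f"stage {i}: missing keys {sorted(missing)}")
    return problems

if __name__ == "__main__":
    print(validate_pipeline(PIPELINE_YAML) or "pipeline config looks OK")
```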


Digital Accessibility: Ensuring Inclusivity in an Online World

"It starts by understanding how people with disabilities use your online platform," he said. While the accessibility issues faced by people who are blind receive considerable attention, it's crucial to address the full spectrum of disabilities that affect technology use, including auditory, cognitive, neurological, physical, speech, and visual disabilities, Henry added. ... The key is to review accessibility during content creation with a diverse group of people and address their feedback in iterations early and often. Bhowmick added that accessibility testing should always be run according to a structured testing script and mature testing methodologies to ensure reliable, reproducible, and sustainable test results. It is important to run accessibility testing during every stage of the software lifecycle: during design, before handing over the design to development, during development, and after development. A professional and thorough testing should take place before releasing the product to customers, Bhowmick said, and the test results should be made available in an accessibility conformance report (ACR) following the Voluntary Product Accessibility Template (VPAT) format.


How Cloud-Native Development Benefits SaaS

Cloud-native practices, patterns, and technologies enhance the benefits of SaaS and COTS while reducing the inherent negatives by: providing an extensible framework for adding new capabilities to commercial applications without having to customize the core product; leveraging API and event-driven architecture to bypass the need for custom data integrations; still offloading the complexity of most infrastructure and security concerns to a provider while gaining additional flexibility in scale and resilience implementation; and enabling opportunities to innovate core business systems with emerging technologies such as generative AI. Enterprises relying on SaaS or COTS still need the flexibility to meet their ever-evolving business requirements. As we have seen with advances in AI over the past year, change and opportunity can arrive quickly and without warning. Chances are that your organization is already on a journey to cloud-native maturity, so take advantage of this effort by implementing technologies and patterns, like leveraging event-driven architectures and serverless functions, to extend your commercial applications rather than customizing or replacing them.
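
As a hedged sketch of the "extend rather than customize" idea, the Python function below follows the common serverless handler shape (for example, an AWS Lambda-style handler(event, context)) and reacts to a hypothetical "order created" webhook event from a commercial SaaS application; the event fields and enrichment logic are invented for illustration.

```python
# Minimal sketch: extending a commercial SaaS app with a serverless function
# instead of customizing its core. The event shape is hypothetical; a real
# integration would follow the vendor's documented webhook schema.
import json

def handler(event, context=None):
    """Lambda-style entry point for a hypothetical 'order.created' webhook."""
    payload = json.loads(event["body"]) if isinstance(event.get("body"), str) else event
    if payload.get("type") != "order.created":
        return {"statusCode": 204, "body": "ignored"}

    order = payload.get("data", {})
    # The side effect lives outside the SaaS product: e.g. push to an internal
    # event bus, call a generative AI service, or update a data warehouse.
    enriched = {
        "order_id": order.get("id"),
        "priority": "high" if order.get("total", 0) > 1000 else "normal",
    }
    return {"statusCode": 200, "body": json.dumps(enriched)}

if __name__ == "__main__":
    sample = {"type": "order.created", "data": {"id": "A-1", "total": 1500}}
    print(handler(sample))
```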


Cybersecurity as a Service Market: A Domain of Innumerable Opportunities

Traditional cybersecurity differs from cybersecurity as a service (CSaaS). Depending on budget, size, and regulatory compliance requirements, different approaches are needed, and organizations are finding it tedious to rely completely on themselves. The conventional method is to build an internal security team by hiring experienced security staff dedicated to performing cybersecurity duties, while CSaaS is an option where the company outsources the security function. A survey found that almost 72.1% of businesses find CSaaS solutions critical for their customer strategy. Let us now look at the growth of the cybersecurity-as-a-service market. ... Some of the challenges to market growth are a lack of training and an inadequate workforce, limited security budgets among SMEs, and a lack of interoperability. The market in North America currently accounts for the maximum share of worldwide revenue. This growth can be attributed to the high level of digitalization, and the surge in the number of connected devices in these countries is projected to remain a growth-propelling factor.


Top 5 (EA) Services Every Team Lead Should Know

The topic of sustainability is on everyone’s priority list these days. It has become an integral part of sociopolitical and global concepts. Not to mention, more and more customers are asking for sustainable products and services; alternatively, they only want to buy from companies that act and operate sustainably themselves. Sustainability must therefore be on the strategic agenda of every company. ... To effectively collaborate with your enterprise IT and ensure the best possible support while you’re making IT-related investment decisions, your IT service providers require feedback. For this, your list of software applications must be known, deficits and opportunities for improvement need to be identified and, above all, a coordinated investment strategy for your IT services is a must. It has to be clear how you can use your IT budget in the most efficient way. ... What do all these different services have to do with EA? A lot. If the above-mentioned services are understood as EA services, their results form a valuable contribution to the creation of a holistic view of your company – the enterprise architecture.


Ensuring Comprehensive Data Protection: 8 NAS Security Best Practices

NAS devices are convenient to use as shared storage, which means they should be connected to other nodes. Normally, those nodes are the machines inside an organization’s network. However, the growing number of gadgets per employee can lead to unintentional external connections. Internet of Things (IoT) devices are a separate threat category. Hackers can target these devices and then use them to propagate malicious code inside corporate networks. If you connect such a device to your NAS, you risk compromising NAS security and then suffering a cyberattack. ... Malicious software remains a ubiquitous threat to any node connected to the network. Malware can steal, delete, and block access to NAS data or intercept incoming and outgoing traffic. Furthermore, the example of Stuxnet shows that powerful computer worms can disrupt and disable IT hardware or even entire production clusters. Insider threats are another category: when planning an organization’s cybersecurity, IT experts reasonably focus on outside threats.


How to design the right type of cyber stress test for your organisation

The success of a cyber stress test largely depends on the realism and relevance of the scenarios and attack vectors used. These should be based on a thorough understanding of the current threat landscape, industry-specific risks, and emerging trends. Scenarios may range from targeted phishing campaigns and ransomware attacks to sophisticated, state-sponsored intrusions. By selecting scenarios that are plausible and aligned with your organisation’s risk profile, you can ensure that the stress test provides valuable insights and prepares your team for real-world challenges. ... A well-designed cyber stress test should encompass a range of activities, from table-top exercises and digital simulations to red team-blue team engagements and penetration testing. This multi-faceted approach allows you to assess the organisation’s capabilities across various domains, including detection, investigation, response, and recovery. Additionally, the stress test should include a thorough evaluation process, with clearly defined success criteria and mechanisms for gathering feedback and lessons learned.



Quote for the day:

“I'd rather be partly great than entirely useless.” -- Neal Shusterman

Daily Tech Digest - June 29, 2024

Urban Digital Twins: AI Comes To City Planning

Urban digital twin technology involves various tools and methods at each lifecycle phase, and because it is still an emerging field, there's a wide range of variability in available solutions. Different providers may focus on different aspects of the technology, offer varying levels of complexity, or specialize in specific use cases or lifecycle phases. Therefore, it's essential for organizations to carefully evaluate their requirements and compare the offerings of different providers to find the best fit for their specific needs. To make the most of urban digital twin technology, city officials and urban planners should first get a solid grasp on what it can do and the benefits it offers throughout a city's development. By aligning city goals to the capabilities of digital twin solutions at each lifecycle stage, teams can make sure they're picking the right tools for their specific needs. This way, cities can tailor their approach to urban digital twins, ensuring they're making the best choices to reach their desired outcomes and create a smarter, more efficient urban environment.


Empowering Citizen Developers With Low- and No-Code Tools

Whether you are building your own codeless platform or adopting a ready-to-use solution, the benefits can be immense. But before you begin, remember that the core of any LCNC platform is the ability to transform a user's visual design into functional code. This is where the real magic happens, and it's also where the biggest challenges lie. For an LCNC platform to help you achieve success, you need to start with a deep understanding of your target users. What are their technical skills? What kind of applications do they want to use? The answers to these questions will inform every aspect of your platform's design, from the user interface/user experience (UI/UX) to the underlying architecture. The UI/UX is crucial for the success of any LCNC platform, but it is just the tip of the iceberg. Under the hood, you'll need a powerful engine that can translate visual elements into clean, efficient code. This typically involves complex AI algorithms, data structures, and a deep understanding of various programming languages. 
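
To make the "visual design to functional code" step concrete, here is a deliberately tiny, hypothetical sketch in Python: it takes a dictionary standing in for a drag-and-drop form definition and emits HTML. Real LCNC engines are vastly more sophisticated, and the component names and schema here are invented.

```python
# Minimal sketch: turning a (hypothetical) visual form definition into HTML.
# A production LCNC engine would handle layout, validation, data binding and
# far more component types; this only illustrates the translation idea.
FORM_SPEC = {
    "title": "Contact us",
    "fields": [
        {"type": "text", "label": "Name", "id": "name"},
        {"type": "email", "label": "Email", "id": "email"},
        {"type": "submit", "label": "Send"},
    ],
}

def render_field(field: dict) -> str:
    if field["type"] == "submit":
        return f'<button type="submit">{field["label"]}</button>'
    return (f'<label for="{field["id"]}">{field["label"]}</label>'
            f'<input type="{field["type"]}" id="{field["id"]}" name="{field["id"]}">')

def render_form(spec: dict) -> str:
    body = "\n  ".join(render_field(f) for f in spec["fields"])
    return f'<form aria-label="{spec["title"]}">\n  {body}\n</form>'

if __name__ == "__main__":
    print(render_form(FORM_SPEC))
```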


Will AI replace cybersecurity jobs?

While AI and ML can streamline many cybersecurity processes, organizations cannot remove the human element from their cyberdefense strategies. Despite their capabilities, these technologies have limitations that often require human insight and intervention, including a lack of contextual understanding and susceptibility to inaccurate results, adversarial attacks and bias. Because of these limitations, organizations should view AI as an enhancement, not a replacement, for human cybersecurity expertise. AI can augment human capabilities, particularly when dealing with large volumes of threat data, but it cannot fully replicate the contextual understanding and critical thinking that human experts bring to cybersecurity. ... AI can automate threat detection and analysis by scanning massive volumes of data in real time. AI-powered threat detection tools can swiftly identify and respond to cyberthreats, including emerging threats and zero-day attacks, before they breach an organization's network. AI tools can also combat insider threats, a significant concern for modern organizations.
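
As an illustrative, hedged sketch of the "scan large volumes of data" idea, the Python snippet below trains an Isolation Forest from scikit-learn on synthetic login telemetry and flags outliers; real threat detection pipelines use far richer features, streaming data, and human review.

```python
# Minimal sketch: unsupervised anomaly detection over synthetic event counts.
# Requires scikit-learn and numpy. The data is fabricated for illustration and
# stands in for the per-host telemetry a real detection pipeline would use.
import numpy as np
from sklearn.ensemble import IsolationForest

rng = np.random.default_rng(42)
normal = rng.normal(loc=[50, 5], scale=[10, 2], size=(500, 2))   # logins, failed logins
suspicious = np.array([[300, 80], [10, 60]])                     # bursts of failures
events = np.vstack([normal, suspicious])

model = IsolationForest(contamination=0.01, random_state=0).fit(events)
labels = model.predict(events)            # 1 = normal, -1 = anomaly

print("flagged rows:", np.where(labels == -1)[0])
```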


Decoding OWASP – A Security Engineer’s Roadmap to Application Security

While the OWASP Top 10 provides a foundational framework for understanding and addressing the most critical web application security risks, OWASP offers a range of other resources that can be instrumental in developing and refining an application security strategy. These include the OWASP Testing Guide, Cheat Sheets, and a variety of tools and projects designed to aid in the practical aspects of security implementation. OWASP Testing Guide – The OWASP Testing Guide is a comprehensive resource that offers a deep dive into the specifics of testing web applications for security vulnerabilities. It covers a wide array of potential vulnerabilities beyond the Top 10, providing guidance on how to rigorously test and validate each one. ... OWASP Cheat Sheets – The OWASP Cheat Sheets are concise, focused guides containing best practices for a specific security topic. They serve as handy guides for security teams and developers to quickly reference when implementing security measures. Cheat sheets can also be used as training materials to educate developers and security professionals on specific security issues and how to mitigate them.
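
For instance, the SQL Injection Prevention Cheat Sheet recommends parameterized queries; the short Python sketch below contrasts the unsafe and safe forms using the standard library's sqlite3 module (the table and data are invented for illustration).

```python
# Minimal sketch: parameterized queries, a core OWASP cheat-sheet recommendation.
# Uses the standard library's sqlite3 module; schema and data are illustrative.
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE users (name TEXT, role TEXT)")
conn.execute("INSERT INTO users VALUES ('alice', 'admin'), ('bob', 'user')")

user_input = "alice' OR '1'='1"

# Unsafe: string concatenation lets the input rewrite the query (SQL injection).
unsafe_query = "SELECT role FROM users WHERE name = '" + user_input + "'"
print("unsafe:", conn.execute(unsafe_query).fetchall())   # returns every row

# Safe: a bound parameter is treated strictly as data, never as SQL.
safe_query = "SELECT role FROM users WHERE name = ?"
print("safe:", conn.execute(safe_query, (user_input,)).fetchall())  # returns nothing
```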


Intel Demonstrates First Fully Integrated Optical I/O Chiplet

The fully integrated OCI chiplet leverages Intel’s field-proven silicon photonics technology and integrates a silicon photonics integrated circuit (PIC), which includes on-chip lasers and optical amplifiers, with an electrical IC. The OCI chiplet demonstrated at OFC was co-packaged with an Intel CPU but can also be integrated with next-generation CPUs, GPUs, IPUs and other system-on-chips (SoCs). This first OCI implementation supports up to 4 terabits per second (Tbps) bidirectional data transfer, compatible with peripheral component interconnect express (PCIe) Gen5. The live optical link demonstration showcases a transmitter (Tx) and receiver (Rx) connection between two CPU platforms over a single-mode fiber (SMF) patch cord. ... The current chiplet supports 64 channels of 32 Gbps data in each direction up to 100 meters (though practical applications may be limited to tens of meters due to time-of-flight latency), utilizing eight fiber pairs, each carrying eight dense wavelength division multiplexing (DWDM) wavelengths.
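
The quoted figures are internally consistent; the quick arithmetic check below shows the math (treating 1 Tbps as 1,000 Gbps).

```python
# Quick arithmetic check of the figures quoted above.
channels = 8 * 8                    # 8 fiber pairs x 8 DWDM wavelengths = 64 channels
per_direction_gbps = channels * 32  # 64 channels x 32 Gbps = 2,048 Gbps, roughly 2 Tbps
bidirectional_tbps = 2 * per_direction_gbps / 1000
print(channels, per_direction_gbps, bidirectional_tbps)  # 64, 2048, 4.096 -> "up to 4 Tbps"
```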


Artificial General Intelligence (AGI): Understanding the Milestones

The idea of creating a machine or a program capable of thinking and acting more like a person was first proposed in the early 1900s. The Turing Test, designed by Alan Turing in 1950 to assess intelligence comparable to that of humans, set the stage. ... Machine learning emerged in the 1950s and 1960s as a result of statistical algorithms that could identify patterns in data and use them to make future decisions without external supervision. ... Expert systems and symbolic AI centered on the encoding of knowledge and the application of rules and symbols in human reasoning. ... Deep learning, a subset of machine learning, has been a crucial breakthrough on the journey toward AGI. In tasks like speech and image recognition, Convolutional Neural Networks and Recurrent Neural Networks perform at a human level of intelligence. ... AGI research has produced numerous important results, ranging from theoretical foundations to deep learning advances. Even if AGI remains an ideal, present AI research is pushing the envelope, imagining a time when AI will fundamentally revolutionize our way of life and work for the better.


Unlocking Innovation: How Critical Thinking Supercharges Design Thinking

Critical thinking involves the objective analysis and evaluation of an issue to form a judgment. It's about questioning assumptions, discerning hidden values, evaluating evidence, and assessing conclusions. This methodical approach is crucial in professional environments for making informed decisions, solving complex problems, and planning strategically. ... Design thinking is a human-centered approach to innovation that integrates the needs of people, the possibilities of technology, and the requirements for business success. It involves five key stages: Empathize, Define, Ideate, Prototype, and Test. Design thinking promotes creativity, collaborative effort, and iterative learning. Merging critical thinking into the design thinking process enhances each stage with thorough analysis and robust evaluation, leading to innovative and effective solutions. ... Critical thinking provides the analytical rigor needed to identify core issues and evaluate solutions, while design thinking fosters creativity and user-centered design.


DAST Vs. Penetration Testing: Comprehensive Guide to Application Security Testing

Dynamic Application Security Testing (DAST) and penetration testing are crucial for identifying and mitigating security vulnerabilities in web applications. While both aim to enhance application security, they differ significantly in their approach, execution, and outcomes. ... Dynamic Application Security Testing (DAST) is an automated security testing methodology that interacts with a running web application to identify potential security vulnerabilities. DAST tools simulate real-world attacks by injecting malicious code or manipulating data, focusing on uncovering vulnerabilities that attackers could exploit. DAST also evaluates the effectiveness of security controls within the application. ... Penetration testing is a security assessment carried out by skilled professionals, often called ethical hackers. These experts simulate real-world attacks to identify and exploit application, network, or system vulnerabilities. While comprehensive, such manual testing can be time-consuming and expensive.
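
To give a feel for how a DAST tool probes a running application, here is a deliberately simplified Python sketch using the requests library: it sends a marker payload in a query parameter and checks whether the response reflects it unencoded, a rough signal of a possible reflected XSS. The URL and parameter name are placeholders, this is not a substitute for a real scanner, and it should only ever be pointed at applications you are authorized to test.

```python
# Minimal sketch of a DAST-style probe: inject a marker and look for unescaped
# reflection in the response. Real scanners use many payloads, crawl the app,
# and analyze response context; this is illustration only. Test only systems
# you own or are explicitly authorized to assess.
import requests

TARGET = "http://localhost:8080/search"      # placeholder target
PARAM = "q"                                   # placeholder parameter name
MARKER = '<script>alert("dast-probe")</script>'

def probe_reflected_xss(url: str, param: str) -> bool:
    resp = requests.get(url, params={param: MARKER}, timeout=10)
    return MARKER in resp.text                # unescaped reflection is suspicious

if __name__ == "__main__":
    try:
        suspicious = probe_reflected_xss(TARGET, PARAM)
        print("possible reflected XSS" if suspicious else "marker not reflected unescaped")
    except requests.RequestException as exc:
        print(f"probe failed: {exc}")
```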


There is no OT apocalypse, but OT security deserves more attention

The whole narrative surrounding attacks on OT environments is therefore quite exaggerated as far as Van der Walt is concerned. “We are not in the OT apocalypse,” in his words. This is important to know, he believes. “In fact, there is a narrative in the market that is out to get organizations to take action and invest.” In other words, we hear more and more that OT environments are under constant attack. At the end of the day, these are actually attacks on organizations’ IT environments. ... “There does exist a very frightening risk that attackers can take over the OT environment,” as Derbyshire puts it. To demonstrate that, he has set up an attack and published about it in scientific circles. This should result in a better understanding of a real OT ransomware attack. ... Finally, it is worth noting that OT security does need more attention. Above all, Van der Walt and Derbyshire want to contribute to the discussion about what an OT attack really is. As Van der Walt summarizes, “IT security has been around for about 25 years, OT security is still very young. We should have learned from our mistakes, so it shouldn’t take another 25 years to get OT security to where IT security is today.”


Manage AI threats with the right technology architecture

Amidst the dynamic market conditions, choosing a future-proof technology architecture for threat management becomes almost inevitable. This underscores the necessity of selecting the best technologies and the right strategic approach. ... The best-of-breed approach allows companies to respond flexibly to new threats and changes in business requirements. When a new technology comes to market, companies can easily integrate it without overhauling their entire security architecture. This promotes agile adaptation and quick implementation of new solutions to stay current with the latest technology. ... Managing an integrated platform is less complex than managing multiple independent systems. This reduces the training requirements for security staff and minimizes the risk of errors arising from the complexity of integrating different systems. ... Ultimately, the choice should efficiently meet the company’s security goals. It is crucial to invest in advanced technologies and ensure that expenditures are proportionate to the risk. This means that investments should be carefully weighed without incurring unnecessary costs.



Quote for the day:

"Most people live with pleasant illusions, but leaders must deal with hard realities." -- Orrin Woodward

Daily Tech Digest - June 28, 2024

AI success: Real or hallucination?

The biggest problem may not be compliance muster, but financial muster. If AI is consuming hundreds of thousands of GPUs per year, requiring that those running AI data centers canvass frantically in search of the power needed to drive these GPUs and to cool them, somebody is paying to build AI, and paying a lot. Users report that the great majority of the AI tools they use are free. Let me try to grasp this; AI providers are spending big to…give stuff away? That’s an interesting business model, one I personally wish was more broadly accepted. But let’s be realistic. Vendors may be willing to pay today for AI candy, but at some point AI has to earn its place in the wallets of both supplier and user CFOs, not just in their hearts. We have AI projects that have done that, but most CIOs and CFOs aren’t hearing about them, and that’s making it harder to develop the applications that would truly make the AI business case. So the reality of AI is buried in hype? It sure sounds like AI is more hallucination than reality, but there’s a qualifier. Millions of workers are using AI, and while what they’re currently doing with it isn’t making a real business case, that’s a lot of activity.


Space: The Final Frontier for Cyberattacks

"Since failing to imagine a full range of threats can be disastrous for any security planning, we need more than the usual scenarios that are typically considered in space-cybersecurity discussions," Lin says. "Our ICARUS matrix fills that 'imagineering' gap." Lin and the other authors of the report — Keith Abney, Bruce DeBruhl, Kira Abercromby, Henry Danielson, and Ryan Jenkins — identified several factors as increasing the potential for outer space-related cyberattacks over the next several years and decades. Among them is the rapid congestion of outer space in recent years as the result of nations and private companies racing to deploy space technologies; the remoteness of space; and technological complexity. ... The remoteness — and vastness of space — also makes it more challenging for stakeholders — both government and private — to address vulnerabilities in space technologies. There are numerous objects that were deployed into space long before cybersecurity became a mainstream concern that could become targets for attacks.


The perils of overengineering generative AI systems

Overengineering any system, whether AI or cloud, happens through easy access to resources and no limitations on using those resources. It is easy to find and allocate cloud services, so it’s tempting for an AI designer or engineer to add things that may be viewed as “nice to have” more so than “need to have.” Making a bunch of these decisions leads to many more databases, middleware layers, security systems, and governance systems than needed. ... “We need to account for future growth,” but this can often be handled by adjusting the architecture as it evolves. It should never mean tossing money at the problems from the start. This tendency to include too many services also amplifies technical debt. Maintaining and upgrading complex systems becomes increasingly difficult and costly. If data is fragmented and siloed across various cloud services, it can further exacerbate these issues, making data integration and optimization a daunting task. Enterprises often find themselves trapped in a cycle where their generative AI solutions are not just overengineered but also far from optimized, leading to diminished returns on investment.
Data fabric is a design concept for integrating and managing data. Through flexible, reusable, augmented, and sometimes automated data integration, or the copying of data into a desired target database, it facilitates data access for business users and data analysts. ... Physically moving data can be tedious, involving planning, modeling, and developing ETL/ELT pipelines, along with associated costs. However, a data fabric abstracts these steps, providing capabilities to copy data to a target database. Analysts can then replicate the data with minimal planning, fewer data silos, and enhanced data accessibility and discovery. Data fabric is an abstracted, semantic-based data capability that provides the flexibility to add new data sources, applications, and data services without disrupting existing infrastructure. ... As the data volume increases, the fabric adapts without compromising efficiency. Data fabric empowers organizations to leverage multiple cloud providers. It facilitates flexibility, avoids vendor lock-in, and accommodates future expansion across different cloud environments.


DFIR and its role in modern cybersecurity

In incident response, digital forensics provides detailed insights to highlight the cause and sequence of events in breaches. This data is vital for successful containment, eradication of the danger, and recovery. Producing post-incident forensic reports can similarly enhance security by pinpointing system vulnerabilities and suggesting actions to prevent future breaches. Incorporating digital forensics into incident response essentially allows you to examine incidents thoroughly, leading to faster recovery, enhanced security measures, and increased resilience to cyber threats. This partnership improves your ability to identify, evaluate, and address cyber threats comprehensively. ... Emerging trends and technologies are shaping the future of DFIR in cybersecurity. Artificial intelligence and machine learning are increasing the speed and effectiveness of threat detection and response. Cloud computing is revolutionising processes with its scalable options for storing and analysing data. Additionally, improved coordination with other cybersecurity sectors, such as threat intelligence and network security, leads to a more cohesive defence plan.


Ensuring Application Security from Design to Operation with DevSecOps

DevSecOps is as much about cultural transformation as it is about tools and processes. Before diving into technical integrations, ensure your team’s mindset aligns with DevSecOps principles. Underestimating the cultural aspects, such as resistance to change, fear of increased workload or misunderstanding the value of security, can impede adoption. You can address these challenges by highlighting the benefits of DevSecOps, celebrating successes and promoting a culture of learning and continuous improvement. Developers should be familiar with the nuances of the security tools in use and how to interpret their outputs. ... DevSecOps is a journey, not a destination. Regularly review the effectiveness of your tool integrations and workflows. Gather feedback from all stakeholders and define metrics to measure the effectiveness of your DevSecOps practices, such as the number of vulnerabilities identified and remediated, the time taken to fix critical issues and the frequency of zero-day attacks and other security incidents. 


Essential skills for leaders in industry 4.0

Agility enables swift adaptation to new technologies and market shifts, keeping your organisation competitive and innovative. Digital leaders must capitalise on emerging opportunities and navigate disruptions such as technological advancements, shifting consumer preferences, and increased global competition. ... Effective communication is vital for digital leadership, especially when implementing organisational change. Inspiring positive, incremental change requires empowering your team to work towards common business goals and objectives. Key communication skills include clarity, precision, active listening, and transparency. ... Empathy is essential for guiding your team through digital transformation. True adoption demands conviction from top leaders and a determined spirit throughout the organisation. Success lies in integrating these concepts into the company’s operations and culture. Acknowledge that change can be overwhelming, and by addressing employees' stressors proactively, you can secure their support for strategic initiatives. ... Courage is indispensable for digital leaders, requiring the embrace of risk to ensure success. 


Platform as a Runtime - The Next Step in Platform Engineering

It is almost impossible to ensure that all developers comply 100% with all of the system's non-functional requirements. Even a simple thing like input validation may vary between developers. For instance, some will not allow nulls in a string field, while others will allow them, causing inconsistency in what is implemented across the entire system. Usually, the first step to aligning all developers on best practices and non-functional requirements is documentation, build and lint rules, and education. However, in a complex world, we can’t build perfect systems. When developers need to implement new functionality, they are faced with trade-offs they need to make. The need for standardization arises to mitigate scaling challenges. Microservices are another attempt to handle scaling issues, but as the number of microservices grows, you will start to face the complexity of a large-scale microservices environment. In distributed systems, requests may fail due to network issues. Performance is degraded since requests flow across multiple services via network communication, as opposed to in-process method calls in a monolith.
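
A tiny, hypothetical Python sketch of the standardization idea: instead of each team hand-rolling its own rules, a shared helper makes the null/empty-string decision once, and every service imports it.

```python
# Minimal sketch: one shared validation helper instead of per-team variations.
# The policy chosen here (reject None, reject blank strings, bound the length)
# is illustrative; the point is that the decision is made once and reused.
class ValidationError(ValueError):
    pass

def require_text(value, field: str, max_len: int = 255) -> str:
    """Shared rule: a required string field is non-null, non-blank, length-bounded."""
    if value is None:
        raise ValidationError(f"{field} must not be null")
    text = str(value).strip()
    if not text:
        raise ValidationError(f"{field} must not be blank")
    if len(text) > max_len:
        raise ValidationError(f"{field} exceeds {max_len} characters")
    return text

# Any service in the platform applies the same rule:
customer_name = require_text("  Ada Lovelace  ", field="customer_name")
print(customer_name)   # "Ada Lovelace"
```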


The distant world of satellite-connected IoT

The vision is that IoT devices, and mobile phones, will be designed so that as they cross out of terrestrial connectivity, they can automatically switch to satellite. Devices will no longer be either/or; they will be both, offering a much more reliable network: when a device loses contact with the terrestrial network, a permanently available alternative can be used. “Satellite is wonderful from a coverage perspective,” says Nuttall. “Anytime you see the sky, you have satellite connectivity. The challenge lies in it being a separate device, and that ecosystem has not really proliferated or grown at scale.” Getting to that point, MacLeod predicts that we will first see people using 3GPP-type standards over satellite links, but they won’t immediately be interoperating. “Things can change, but in order to make the space segment super efficient, it currently uses a data protocol that's referred to as NIDD - non-IP-based data delivery - which is optimized for trickier links,” explains MacLeod. “But NB-IoT doesn’t use it, so the current style of addressing data communication in space isn’t mirrored by that on the ground network. Of course, that will change, but none of us knows exactly how long it will take.”


Navigating the cloud: How SMBs can mitigate risks and maximise benefits

SMBs often make several common mistakes when it comes to cloud security. By recognizing and addressing these blind spots, organizations can significantly enhance their cybersecurity. One major mistake is placing too much trust in the cloud provider. Many IT leaders assume that investing in cloud services means fully outsourcing security to a third party. However, security responsibilities are shared between the cloud service provider (CSP) and the customer. The specific responsibilities depend on the type of cloud service and the provider. Another common error is failing to back up data. Organisations should not assume that their cloud provider will automatically handle backups. It's essential to prepare for worst-case scenarios, such as system failures or cyberattacks, as lost data can lead to significant downtime and losses in productivity and reputation. Neglecting regular patching also exposes cloud systems to vulnerabilities. Unpatched systems can be exploited, leading to malware infections, data breaches, and other security issues. Regular patch management is crucial for maintaining cloud security, just as it is for on-premises systems.



Quote for the day:

"What seems to us as bitter trials are often blessings in disguise." -- Oscar Wilde

Daily Tech Digest - June 27, 2024

Is AI killing freelance jobs?

Work that has previously been done by humans, such as copywriting and developing code, is being replicated by AI-powered tools like ChatGPT and Copilot, leading many workers to anticipate that these tools may well swipe their jobs out from under them. And one population appears to be especially vulnerable: freelancers. ... While writing and coding roles were the most heavily affected freelance positions, they weren’t the only ones. For instance, the researchers found a 17% decrease in postings related to image creation following the release of DALL-E. Of course, the study is limited by its short-term outlook. Still, the researchers found that the trend of replacing freelancers has only increased over time. After splitting their nine months of analysis into three-month segments, each progressive segment saw further declines in the number of freelance job openings. Zhu fears that the number of freelance opportunities will not rebound. “We can’t say much about the long-term impact, but as far as what we examined, this short-term substitution effect was going deeper and deeper, and the demands didn’t come back,” Zhu says.


Can data centers keep up with AI demands?

As the cloud market has matured, leaders have started to view their IT infrastructure through the lens of ‘cloud economics.’ This means studying the cost, business impact, and resource usage of a cloud IT platform in order to collaborate across departments and determine the value of cloud investments. It can be a particularly valuable process for companies looking to introduce and optimize AI workloads, as well as reduce energy consumption. ... As the demand for these technologies continues to grow, businesses need to prioritize environmental responsibility when adopting and integrating AI into their organizations. It is essential that companies understand the impact of their technology choices and take steps to minimize their carbon footprint. Investing in knowledge around the benefits of the cloud is also crucial for companies looking to transition to sustainable technologies. Tech leaders should educate themselves and their teams about how the cloud can help them achieve their business goals while also reducing their environmental impact. As newer technologies like AI continue to grow, companies must prepare for the best ways to handle workloads. 


Building a Bulletproof Disaster Recovery Plan

A lot of companies can't effectively recover because they haven't planned their tech stack around the need for data recovery, which should be central to core technology choices. When building a plan, companies should understand the different ways that applications across an organization’s infrastructure are going to fail and how to restore them. ... When developing the plan, prioritizing the key objectives and systems is crucial to ensure teams don't waste time on nonessential operations. Then, ensure that the right people understand these priorities by building out and training your incident response teams with clear roles and responsibilities. Determine who understands the infrastructure and what data needs to be prioritized. Finally, ensure they're available 24/7, including with emergency contacts and after-hours contact information. While storage backups are a critical part of disaster recovery, they should not be considered the entire plan. While essential for data restoration, they require meticulous planning regarding storage solutions, versioning, and the nuances of cold storage. 


How are business leaders responding to the AI revolution?

While AI provides a potential treasure trove of possibilities, particularly when it comes to effectively using data, business leaders must tread carefully when it comes to risks around data privacy and ethical implications. ‌While the advancements of generative AI have been consistently in the news, so too have the setbacks major tech companies are facing when it comes to data use. ... “Controls are critical,” he said. “Data privileges may need to be extended or expanded to get the full value across ecosystems. However, this brings inherent risks of unintentional data transmission and data not being used for the purpose intended, so organisations must ensure strong controls and platforms that can highlight and visualise anomalies that may require attention.” ... “Enterprises must be courageous around shutting down automation and AI models that while showing some short-term gain may cause commercial and reputational damage in the future if left unchecked.” He warned that a current skills shortage in the area of AI might hold businesses back. 


AI development on a Copilot+ PC? Not yet

Although the Copilot+ PC platform (and the associated Copilot Runtime) shows a lot of promise, the toolchain is still fragmented. As it stands, it’s hard to go from model to code to application without having to step out of your IDE. However, it’s possible to see how a future release of the AI Toolkit for Visual Studio Code can bundle the QNN ONNX runtimes, as well as make them available to use through DirectML for .NET application development. That future release needs to be sooner rather than later, as devices are already in developers’ hands. Getting AI inference onto local devices is an important step in reducing the load on Azure data centers. Yes, the current state of Arm64 AI development on Windows is disappointing, but that’s more because it’s possible to see what it could be, not because of a lack of tools. Many necessary elements are here; what’s needed is a way to bundle them to give us an end-to-end AI application development platform so we can get the most out of the hardware. For now, it might be best to stick with the Copilot Runtime and the built-in Phi-Silica model with its ready-to-use APIs.


The Role of AI in Low- and No-Code Development

While AI is invaluable for generating code, it's also useful in your low- and no-code applications. Many low- and no-code platforms allow you to build and deploy AI-enabled applications. They abstract away the complexity of adding capabilities like natural language processing, computer vision, and AI APIs from your app. Users expect applications to offer features like voice prompts, chatbots, and image recognition. Developing these capabilities "from scratch" takes time, even for experienced developers, so many platforms offer modules that make it easy to add them with little or no code. For example, Microsoft has low-code tools for building Power Virtual Agents (now part of its Copilot Studio) on Azure. These agents can plug into a wide variety of skills backed by Azure services and drive them using a chat interface. Low- and no-code platforms like Amazon SageMaker and Google's Teachable Machine manage tasks like preparing data, training custom machine learning (ML) models, and deploying AI applications. 


The 5 Worst Anti-Patterns in API Management

As a modern Head of Platform Engineering, you strongly believe in Infrastructure as Code (IaC). Managing and provisioning your resources in declarative configuration files is a modern and great design pattern for reducing costs and risks. Naturally, you will make this a strong foundation while designing your infrastructure. During your API journey, you will be tempted to take some shortcuts because it can be quicker in the short term to configure a component directly in the API management UI than to set up a clean IaC process. Or it might seem easier, at first, to change the production runtime configuration manually instead of deploying an updated configuration from a Git commit workflow. Of course, you can always fix it later, but deep inside, those kludges stay there forever. Or worse, your API management product fails to provide a consistent IaC user experience: some components need to be configured in the UI, some parts use YAML, others use XML, and you even have proprietary configuration formats.


Ownership and Human Involvement in Interface Design

When an interface needs to be built between two applications with different owners, without any human involvement, we have the Application Integration scenario. Application Integration is similar to IPC in some respects; for example, the asynchronous broker-based choice I would make in IPC, I would also make for Application Integration for more or less the same reasons. However, in this case, there is another reason to avoid synchronous technologies: ownership and separation of responsibilities. When you have to integrate your application with another one, there are two main facts you need to consider: a) Your knowledge of the other application and how it works is usually low or even nonexistent, and b) Your control of how the other application behaves is again low or nonexistent. The most robust approach to application integration (again, a personal opinion!) is the approach shown in Figure 3. Each of the two applications to be integrated provides a public interface. The public interface should be a contract. This contract can be a B2B agreement between the two application owners.
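
A hedged, self-contained Python sketch of that contract-plus-broker idea follows: the "contract" is a small message schema both owners agree on, and the standard-library queue stands in for a real message broker (Kafka, RabbitMQ, and so on); the names and fields are invented for illustration.

```python
# Minimal sketch: asynchronous, broker-based integration behind an agreed contract.
# queue.Queue stands in for a real broker; the message schema plays the role of
# the B2B contract between the two application owners.
import json
import queue
from dataclasses import dataclass, asdict

broker = queue.Queue()        # stand-in for Kafka / RabbitMQ / a managed event bus

@dataclass
class InvoiceIssued:          # the agreed public contract
    invoice_id: str
    amount_cents: int
    currency: str

def publish(event: InvoiceIssued) -> None:
    """Application A only knows the contract, not application B's internals."""
    broker.put(json.dumps({"type": "InvoiceIssued", "data": asdict(event)}))

def consume_one() -> None:
    """Application B processes messages at its own pace."""
    message = json.loads(broker.get())
    if message["type"] == "InvoiceIssued":
        data = message["data"]
        print(f"recording invoice {data['invoice_id']} for {data['amount_cents']} {data['currency']}")

publish(InvoiceIssued("INV-42", 129900, "EUR"))
consume_one()
```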


Reports show ebbing faith in banks that ignore AI fraud threat

The ninth edition of its Global Fraud Report says businesses are worried about the rate at which digital fraud is evolving and how established fraud threats such as phishing may be amplified by generative AI. Forty-five percent of companies are worried about generative AI’s ability to create more sophisticated synthetic identities. Generative AI and machine learning are named as the leading trends in identity verification – both the engine for, and potential solution to, a veritable avalanche of fraud. IDology cites recent reports from the Association of Certified Fraud Examiners (ACFE), which say businesses worldwide lose an estimated 5 percent of their annual revenues to fraud. “Fraud is changing every year alongside growing customer expectations,” writes James Bruni, managing director of IDology, in the report’s introduction. “The ability to successfully balance fraud prevention with friction is essential for building customer loyalty and driving revenue.” “As generative AI fuels fraud and customer expectations grow, multi-layered digital identity verification is essential for successfully balancing fraud prevention with friction to drive loyalty and grow revenue.”


What IT Leaders Can Learn From Shadow IT

Despite its shady reputation, shadow IT is frequently more in tune with day-to-day business needs than many existing enterprise-deployed solutions, observes Jason Stockinger, a cyber leader at Royal Caribbean Group, where he's responsible for shoreside and shipboard cyber security. "When shadow IT surfaces, organization technology leaders should work with business leaders to ensure alignment with goals and deadlines," he advises via email. ... When assessing a shadow IT tool's potential value, it's crucial to evaluate how it might be successfully integrated into the official enterprise IT ecosystem. "This integration must prioritize the organization's ability to safely adopt and incorporate the tool without exposing itself to various risks, including those related to users, data, business, cyber, and legal compliance," Ramezanian says. "Balancing innovation with risk management is paramount for organizations to harness productivity opportunities while safeguarding their interests." IT leaders might also consider turning to their vendors for support. "Current software provider licensing may afford the opportunity to add similar functionality to official tools," Orr says.



Quote for the day:

"Ninety percent of leadership is the ability to communicate something people want." -- Dianne Feinstein

Daily Tech Digest - June 26, 2024

How Developers Can Head Off Open Source Licensing Problems

There are proactive steps developers can take as well. For instance, developers can opt for code that isn’t controlled by a single vendor. “The other side, beyond the licensing, is to look and to understand who’s behind the license, the governance, policy,” he said. Another option to provide some cushion of protection is to use a vendor that specializes in distributing a particular open source solution. A distro vendor can provide indemnification against exposure, he said. They also provide other benefits, such as support and certification to run on specific hardware set-ups. Developers can also look for open source solutions that are under a foundation, rather than a single company, he suggested, although he cautioned that even that isn’t a failsafe measure. “Even foundations are not bulletproof,” he said. “Foundations provide some oversight, some governance and some other means to reduce the risk. But if ultimately, down the path, it ends up again being backed up by a single vendor, then it’s an issue even under a foundation.”


Line of Thought: A Primer on State-Sponsored Cyberattacks

A cyberattack may be an attractive avenue for a state actor and/or its affiliates since it may give them the ability to disrupt an adversary while maintaining plausible deniability.[15] It may also reduce the risk of a retaliatory military strike by the victim.[16] That’s because actually determining who was behind a cyberattack is notoriously difficult: attacks can be shrouded behind impersonated computers or hijacked devices and it may take months before actually discovering that an attack has occurred.[17] Some APTs leverage an approach called “living off the land” which enables them to disguise an attack as ordinary network or system activities.[18] Living off the land enabled one APT actor to reportedly enter network systems in America’s critical infrastructure and conduct espionage—reportedly with an eye toward developing capabilities to disrupt communications in the event of a crisis.[19] The attack occurred sometime in 2021, but, due to the stealthy nature of living off the land techniques, wasn’t identified until 2023.


Taking a closer look at AI’s supposed energy apocalypse

Determining precisely how much of that data center energy use is taken up specifically by generative AI is a difficult task, but Dutch researcher Alex de Vries found a clever way to get an estimate. In his study "The growing energy footprint of artificial intelligence," de Vries starts with estimates that Nvidia's specialized chips are responsible for about 95 percent of the market for generative AI calculations. He then uses Nvidia's projected production of 1.5 million AI servers in 2027—and the projected power usage for those servers—to estimate that the AI sector as a whole could use up anywhere from 85 to 134 TWh of power in just a few years. To be sure, that is an immense amount of power, representing about 0.5 percent of projected electricity demand for the entire world (and an even greater ratio in the local energy mix for some common data center locations). But measured against other common worldwide uses of electricity, it's not representative of a mind-boggling energy hog. A 2018 study estimated that PC gaming as a whole accounted for 75 TWh of electricity use per year, to pick just one common human activity that's on the same general energy scale.
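
A quick back-of-the-envelope check of that 0.5 percent figure, assuming global electricity demand of roughly 27,000 TWh per year (an assumption made for this sketch, not a number from the article):

```python
# Back-of-the-envelope check of the "about 0.5 percent" claim. The global
# demand figure is an assumption for illustration (~27,000 TWh/year).
ai_low_twh, ai_high_twh = 85, 134
world_demand_twh = 27_000          # assumed, not from the article

for ai in (ai_low_twh, ai_high_twh):
    print(f"{ai} TWh -> {ai / world_demand_twh:.2%} of assumed world demand")
# roughly 0.31% to 0.50%, consistent with "about 0.5 percent" at the high end
```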


Stepping Into the Attacker’s Shoes: The Strategic Power of Red Teaming

Red Teaming service providers spend years preparing their infrastructure to conduct Red Teaming exercises. It is not feasible to quickly build a customized infrastructure for a specific customer; this requires prior development. Tailoring the service to a particular client can take anywhere from one to four months. During this period, preliminary exploration takes place. Red Teams use this time to identify and construct a combination of infrastructure elements that will not raise alarms among SOC defenders. ... The focus has shifted towards building a more layered defense, driven by Covid restrictions, remote work and the transition to the cloud. As companies enhance their defensive measures, there is a growing need to conduct Red Teaming projects to evaluate the effectiveness of these new systems and solutions. The risk of increased malicious insider activity has made the hybrid model increasingly relevant for many Red Teaming providers. This approach is neither a complete White Box, where detailed infrastructure information is provided upfront, nor traditional Red Teaming.


Six NFR strategies to improve software performance and security

Based on their analysis and discussions with developers, the researchers identified six key points: Prioritization and planning: NFRs should be treated with as much priority as other requirements. They should be planned in advance and reviewed throughout a development project. Identification and discussion: NFRs should be identified and discussed early in the development process, ideally in the design phase. During the evolution of the software, these NFRs should be revisited if necessary. Use of technologies allied with testing: The adequacy of the NFR can be verified through technologies already approved by the market, where the NFRs associated with those projects satisfy the project's complexity. Benchmarks: Using benchmarks to simulate the behavior of a piece of code or algorithm under different conditions is recommended, since it allows developers to review and refactor code when it is not meeting the project-specified NFRs. Documentation of best practices: By keeping the NFRs well-documented, developers will have a starting point to address any NFR problem when they appear.
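
On the benchmarks point, a minimal Python sketch using the standard library's timeit module is shown below; the function being measured and the 50 ms budget are invented stand-ins for a project-specified performance NFR.

```python
# Minimal sketch: benchmarking a function against a performance NFR.
# The workload and the 50 ms budget are illustrative stand-ins.
import timeit

def build_report(n: int) -> int:
    return sum(i * i for i in range(n))        # placeholder workload

BUDGET_SECONDS = 0.05                          # hypothetical NFR: under 50 ms

for size in (10_000, 100_000, 1_000_000):
    seconds = min(timeit.repeat(lambda: build_report(size), number=1, repeat=5))
    status = "OK" if seconds <= BUDGET_SECONDS else "VIOLATES NFR"
    print(f"n={size:>9,}  {seconds * 1000:7.2f} ms  {status}")
```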


Exploring the IT Architecture Profession

In IT architecture, it takes many years to gain the knowledge and skills required to be a professional architect. In my opinion, at the core of our profession are our knowledge and skills in technology. This is what we bring to the table; it is our knowledge and expertise in both business and technology that make the IT architecture profession unique. In addition to business and technology skills, it is essential that the architect possesses soft skills such as leadership, politics, and people management. These are often undervalued. When communicating IT architecture and what an IT architect does, I notice that there are a number of recurring aspects: scope, discipline, domain, and role. ... Perhaps the direction for the profession is to focus on gaining consensus around how we describe scope, domain and discipline rather than worrying too much about titles. An organisation should be able to describe a role from these aspects and describe the required seniority. At the end of the day, this was a thought-provoking exercise and with regards to my original problem, the categorisation of architecture views, I found that scope was perhaps the simplest way to organise the book.


Why collaboration is vital for achieving ‘automation nirvana’

Beeson says that one of the main challenges of implementing automation is getting different teams to collaborate on creating automation content. He explains that engineers and developers often have their own preferred programming language or tools and can be reluctant to share content or learn something new. “A lack of collaboration prevents the ‘automation nirvana’ of removing humans from complex processes, dramatically reducing automation benefits,” he says. “Individuals tend to be reluctant to contribute if they don’t have confidence in the automation tool or platform. “Automation content developers want the automation language to be easy to learn, compatible with their technology choices and provide control to ensure the content they contribute is not misused or modified.” ... When it comes to the future of automation, Beeson has no shortage of thoughts and predictions for the sector, especially relating to the role of automation in defence. “Defence is not immune from the ‘move to cloud’ trend, so hybrid cloud automation is becoming ever more prevalent in the sector,” he says


Securing the digital frontier: Crucial role of cybersecurity in digital transformation advisory

Advisory services have the expertise to perform in-depth technical security assessments to identify and help prioritize vulnerabilities in an organization’s infrastructure. These assessments include the use of specialised tools and manual testing to do a comprehensive assessment. Systems are examined to validate that they are following security best practices and prescribed industry standards. ... Advisors help organisations develop threat models to identify potential attack vectors and assess associated risks. Several methodologies, like STRIDE, Kill Chain and PASTA, are used to systematically analyse threats and risks. ... An organisation’s security is only as good as its weakest link, and generally, the weakest link is an individual within the organisation. Advisory services undertake regular training to educate and inform employees on security best practices. They can also provide simulation training, such as phishing simulations, and develop comprehensive security awareness programs that cover topics like secure password practices, data handling, data privacy, and incident reporting.
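
As a small illustration of how a methodology like STRIDE structures that analysis, the Python sketch below enumerates the six STRIDE categories against a hypothetical "payment API" component; the example threats are invented for illustration.

```python
# Minimal sketch: enumerating STRIDE categories for one hypothetical component.
# A real threat model covers every component, data flow and trust boundary.
STRIDE = {
    "Spoofing": "attacker impersonates a legitimate client of the payment API",
    "Tampering": "request payloads are modified in transit",
    "Repudiation": "a caller denies having submitted a transaction",
    "Information disclosure": "card data leaks through verbose error messages",
    "Denial of service": "the API is flooded with bogus payment requests",
    "Elevation of privilege": "a read-only integration gains refund permissions",
}

component = "payment API"   # hypothetical component under analysis
for category, example_threat in STRIDE.items():
    print(f"[{component}] {category}: {example_threat}")
```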


Delving Into the Risks and Rewards of the Open-Source Ecosystem

While some risk is inevitable, enterprise teams need to understand that risk and use open-source software accordingly. “As a CISO, the biggest risk I see is for organizations not to be intentional about how they use open-source software,” says Hawkins. “It’s extremely valuable to build on top of these great projects, but when we do, we need to make sure we understand our dependencies. Including the evaluation of the open-source components as well as the internally developed components is key to being able to accurately [understand] our security posture.” ... So it isn’t feasible to ditch open-source software, and risk is part of the deal. For enterprises, that reality necessitates risk management, and that need only grows as reliance on open-source software increases. “As we move towards cloud and these kinds of highly dynamic environments, our dependency on open source is going up even higher than it ever did in the past,” says Douglas. If enterprise leaders shift how they view open-source software, they may be able to better reap its rewards while mitigating its risks.
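
As a concrete starting point for “understanding our dependencies”, the following is a minimal sketch, assuming a Python environment and not drawn from the article, that inventories the packages installed in that environment together with the dependencies each one declares. In a real programme this output would typically feed an SBOM generator or a vulnerability scanner rather than being printed.

```python
# Minimal sketch: inventory installed Python distributions and the
# dependencies each one declares (illustrative; a real pipeline would
# feed this into an SBOM or a vulnerability scanner).
from importlib import metadata

def dependency_inventory() -> dict[str, list[str]]:
    """Map each installed distribution to its declared requirements."""
    inventory: dict[str, list[str]] = {}
    for dist in metadata.distributions():
        name = dist.metadata["Name"] or "unknown"
        # `requires` lists the Requires-Dist entries from package metadata,
        # or None when the package declares no dependencies.
        inventory[name] = list(dist.requires or [])
    return inventory

if __name__ == "__main__":
    for package, requirements in sorted(dependency_inventory().items()):
        print(f"{package}: {len(requirements)} declared dependencies")
```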


Rethinking physical security creates new partner opportunities

Research conducted by Genetec has shown a 275% increase in the number of end users wanting to move more physical security workloads to the cloud. The research also indicates that many organisations aren’t treating SaaS and cloud as an ‘all or nothing’ proposition. However, while a hybrid-cloud infrastructure provides flexibility, it also acts as the gateway to the physical security cloud journey, so organisations need to ensure they have tools in place that can protect data regardless of where it resides. ... Organisations that aren’t able to keep up with the upgrade cycle often fall into the consumption gap: the end user can see the platform evolving with new features and functionality but is unable to take advantage of all of it. The bigger the consumption gap, the more likely it is to hold the organisation back from physical security best practices. SaaS promises to close that gap because it keeps organisations on the latest software version. Importantly, the solution is updated in a way that is pre-approved by the organisation and on a timeframe of its choosing.



Quote for the day:

"Without growth, organizations struggle to add talented people. Without talented people, organizations struggle to grow." -- Ray Attiyah

Daily Tech Digest - June 25, 2024

Six Strategies For Making Smarter Decisions

Broaden your options - Instead of Options A and B, what about C or even D? A technique I use in working with client organizations is to set up a “challenge statement” that inevitably reveals multiple possibilities to be decided upon. I’ll have small groups of four or five people take 10 minutes to list all the options without discussing or critiquing them during the exercise. Frame challenge statements as follows: “In what ways might we accomplish X?” ... Listen to your gut - Intuition is knowing something without knowing quite how we know it. All of us have it, but in a data-driven world, listening to it becomes harder. Before making an important choice, one executive I interviewed gathers information, weighs all the facts – then takes time to stop and listen to what his gut is telling him. “When a decision doesn’t feel good,” the executive commented, “it feels like a stomachache. And when a decision feels right, it’s like I’ve eaten a great meal. If I don’t feel good in my gut about a decision, I don’t care if the numbers say we’re going to make a billion dollars, I won’t go ahead with it. That’s how important intuition is to me.”


Overcoming Stagnation And Implementing Change To Facilitate Business Growth: The How-To

Overcoming stagnation is about understanding that doing the same thing over and over again will give you the same results over and over again; changing the former will naturally change the latter. The three main objectives of any transformation initiative that aims to set up a strong foundation to scale or grow a business are: to become financially lean, with the ability to scale either up or down as market demand requires; to become internally efficient; and to run day-to-day operations independently of the founder or leader. ... Ideally, it would be wise to aim to keep 60-70% of the total operating cost as fixed costs, with the remainder as variable costs, allowing the flexibility to adjust the cost structure to business needs while maintaining profitability throughout the transition and beyond. When an efficient business achieves this level of financial optimization and is managed by a competent team, the founder or leader will have the time to work on the business, concentrating on long-term strategic growth issues instead of the day-to-day running of the enterprise.
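
As a back-of-the-envelope illustration of that guideline, a quick check of an operating cost structure against the 60-70% band might look like the sketch below; the figures are invented, not drawn from the article.

```python
# Illustrative check of the 60-70% fixed-cost guideline using made-up figures.
def fixed_cost_share(fixed_costs: float, variable_costs: float) -> float:
    """Return fixed costs as a fraction of total operating cost."""
    total = fixed_costs + variable_costs
    return fixed_costs / total if total else 0.0

share = fixed_cost_share(fixed_costs=650_000, variable_costs=350_000)
print(f"Fixed-cost share: {share:.0%} (within 60-70% band: {0.60 <= share <= 0.70})")
```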


Build your resilience toolkit: 3 actionable strategies for HR leaders

Go beyond current job descriptions to identify talent or skill gaps. Prioritise forward-looking talent acquisition strategies and design upskilling and reskilling programs. Aim to close the skills gap and attract talent with transferable skill sets and a growth mindset. This approach keeps your workforce adaptable and prepared for future challenges. ... Adapting work models and fostering continuous learning cultures are essential. HR leaders can implement flexible work arrangements, such as remote or hybrid models. Encouraging experimentation and risk-taking within teams, and integrating continuous learning opportunities into performance management systems, are key actionable steps. Agile approaches help HR leaders adapt quickly to shifting business requirements, and collaborative work environments are critical to an agile HR strategy. ... Open communication and safe spaces are essential for a supportive culture. HR leaders can encourage employees to voice concerns by creating channels for open dialogue. This approach ensures employees feel heard and valued, contributing to a more inclusive workplace.


The 4 skills you need to engineer a career in automation

Automation engineers are often required to work cohesively with multidisciplinary teams, and for that reason it can be useful to have a solid grasp of workplace soft skills in addition to the compulsory hard skills. Automation engineers are expected to take complex, highly nuanced information and relay it not only to their peers but also to people who do not have a strong technical background. This requires expert communication skills, as well as an ability to collaborate. ... If you are considering a career as an automation engineer, a foundational understanding of programming languages and how they are applied is compulsory, as you will frequently need to write and maintain the code that keeps operational systems running. The choice of programming language greatly affects the success of automation in the workplace, as it determines versatility, scalability and ease of integration. ... As AI advances, global workplaces will have to evolve in tandem, meaning automation engineers will need a baseline level of AI and machine learning skills to stay competitive.


Navigating the Evolving World of Cybersecurity Regulations in Financial Services

Accountability for cybersecurity measures is a key element of the NYDFS regulations. CISOs now must provide a report updating their governing body or board of directors on the company’s cybersecurity posture and plans to fix any security gaps, Burke says. Maintaining accountability entails communicating with the board about cybersecurity risks, explains Kirk J. Nahra, partner and co-chair of the cybersecurity and privacy practice at law firm WilmerHale. “The board needs to understand that its job is to evaluate major issues for a company, and a ransomware attack that shuts down the whole business is a major risk,” Nahra says. “The boards have to become more sophisticated about information security.” ... The NYDFS calls for organizations to have cybersecurity policies that are reviewed and approved annually. Previously, regulations concentrated more on processes and best practices, Nahra says. Now they are becoming more prescriptive, but different regulators are inconsistent with one another, and their standards may conflict at times.


How Banks Can Get Past the AI Hype and Deliver Real Results

If the bank’s backend systems aren’t automated, all the rapidly responding chatbot has done is make a promise that a human will have to fulfil when they finally get to that point in the inbox, Bandyopadhyay says. When they ultimately get back to the customer, that efficient chatbot doesn’t look so efficient after all. Bandyopadhyay explains that this is merely an illustration of how a bank has to be ready for the front ends and back ends of customer-facing systems to be in sync; otherwise, it risks alienating customers who have significant problems. ... The real power of GenAI is its ability to digest and deploy unstructured data. But Bandyopadhyay points out that most banks use legacy systems that can’t capture any of that information. “It’s not data that you put in rows and columns on a spreadsheet,” says Bandyopadhyay. “It’s the language we write and that we speak.” To truly implement GenAI in the long run, he continues, banks will have to solve the longstanding legacy-systems problem. Until then, most of their databases aren’t talking GenAI’s language.


Singapore lays the groundwork for smart data center growth

In a move that stunned industry observers, Singapore announced on May 30 that it would release more data center capacity to the tune of 300MW, a substantial figure and a new policy direction for the city-state. ... The 300MW will come as part of a newly unveiled Green Data Centre (DC) Roadmap drawn up by IMDA, so it does have conditions attached. According to the statutory board, the roadmap was developed to chart a “sustainable pathway” for the continued growth of data centers in Singapore in support of the nation’s digital economy. Per the roadmap, Singapore hopes to work with the industry to pioneer solutions for more resource-efficient data centers. One way to view the extra capacity is as a carrot the government can use to spur data center operators to innovate and accelerate data center efficiency at both the hardware and software levels. It is all well and good to talk about allocating hundreds of megawatts of capacity for data centers. But with electrical grids around the world straining under electrification and sharply rising power demands, is Singapore in a position to deliver this capacity to data center operators today?


Information Blocking of Patient Records Could Cost Providers

Information blocking is defined as a practice that is likely to interfere with the access, exchange or use of electronic health information, except as required by law or as specified in one of nine information blocking exceptions. ... Under the security exception, it is not considered information blocking for an actor to interfere with the access, exchange or use of EHI in order to protect the security of that information, provided certain conditions are met. For example, during a security incident such as a ransomware attack, a healthcare provider might be unable to provide access to, or exchange of, certain EHI for a time, and that would not constitute information blocking. ... So, as of now, if a healthcare provider does not participate in any of the CMS payment programs that are currently subject to the disincentives, it does not face any potential penalties for information blocking. But that could change. During a media briefing on Monday, HHS officials said the department is considering adding other disincentives for healthcare providers that do not participate in such CMS programs.


How is AI transforming the insurtech sector?

The use of AI also brings risks and ethical considerations for insurers and insurtech firms. “With all AI, you need to understand where the AI models come from, where the data they are trained on comes from and, importantly, whether there is an in-built bias,” says Kevin Gaut, chief technology officer at insurtech INSTANDA. “Proper due diligence on the data is the key, even with your own internal data.” It’s essential, too, that organisations can explain any decisions that are taken, warns Muylle, and that there is at least some human oversight. “A notable issue is the black-box nature of some AI algorithms that produce results without explanation,” he says. “To address this, it’s essential to involve humans in the decision-making loop, establish clear AI principles and involve an AI review board or third party. Companies can avoid pitfalls by being transparent about their AI use and co-operating when questioned.” AI applications also raise the risk of organisations being caught out by cyber-attacks. “Perpetrators can use generative AI to produce highly believable yet fraudulent insurance claims,” points out Brugger.


Evaluating crisis experience in CISO hiring: What to look for and look out for

So long as a candidate’s track record is verifiable and their contribution to handling intrusion events is clear, direct experience of a crisis may actually be more indicative of future success than more traditional metrics. By contrast, be wary of the “onlookers”: individuals with qualifications whose learned experience comes from arm’s-length involvement in a crisis. While such persons may contribute positively to their organization, the role of the crisis in their hiring should be de-emphasized relative to more conventional metrics of future performance. ... The emerging consensus of research is that being present for multiple stages of the response lifecycle — being impacted by an attack’s disruptions or helping with preparedness for a future response — is far better experience than simply witnessing an attack. Those who experience the initial effects of a compromise or other attack and then go on to orient, analyze, and engage in mitigation activities are the ones for whom over-generalization and perverse informational reactions appear less likely.



Quote for the day:

"The most powerful leadership tool you have is your own personal example." -- John Wooden