Daily Tech Digest - November 28, 2023

How a digital design firm navigated its SOC 2 audit

One of the more intense aspects of the audit was the testing of our incident response plan. We had to provide records of past incidents, how they were handled, and the lessons learned. Moreover, the auditors conducted tabletop exercises to assess our preparedness for potential future security events. After weeks of evaluation, the auditors presented their findings. We excelled in some areas, such as in our encryption of sensitive data and our robust user authentication systems. However, they also identified areas for improvement, like the need for more granular access controls and enhanced monitoring of system configurations. Post-audit, we were given a roadmap of sorts--a list of recommendations to address the identified deficiencies. This phase was dedicated to remediation, where we worked diligently to implement the auditors’ suggestions and improve our systems. Reflecting on the transformative impact of SOC 2 certification, L+R has discerned a profound shift in the dynamics of client engagement and internal processes. SOC 2 certification transcends the realm of compliance, fostering enriched dialogues, bolstering trust, and catalyzing decision-making at the executive level.


Is anything useful happening in network management?

The first of these is a management take on something that's already becoming visible in a broader way: absorbing the network into something else. Companies have said for years that the data center network, the LAN, is really driven by data center hardware/software planning and not by network planning. They're now finding that a broader use of hybrid cloud, where the cloud becomes the front-end technology for application access, is pulling the WAN inside the cloud. The network, then, is becoming less visible, and thus explicit network management is becoming less important. ... The second development gaining attention is being proposed by a number of vendors, the largest being Nokia. It envisions using "digital twin" technology, something most commonly associated with IoT and industrial metaverse applications, to construct a software model of the network based on digital twins of network devices. With this approach, the network becomes in effect an industrial system, and potentially could then be monitored and controlled by tools designed for industrial IoT and industrial metaverse deployments. 


The Basis for Business and Solution Architecture

The Business Architect, just like the Solution Architect, is a business technology strategist. The delivery of technology driven business value is core to their professional capability and career. So for that purpose they share a set of skills/competencies, language, professional goals, and experiences with each other. Any other method of architecture has been shown to fail. Without the shared capabilities and focus the team quickly begins to cannibalise its own value proposition to the enterprise and argue about baseline definitions, purpose and ownership. The level of team synergy and shared community is one of the most important first steps to a mature architecture practice. With that in place the Business and Solution Architects work very well together and in alignment with the EA from strategy through execution to measured value. Business Architects must focus on program level outcomes, those that are scoped at the business capability and/or department or region level. These levels are where real business goals and measurements occur and stand closest to the customer while retaining executive authority.


Is the Future of Data Centers Under the Sea?

One of the most appealing aspects of underwater data centers is their proximity to large population centers. Around half of the world’s population lives within 125 miles of a coastal area. Situating data centers near coastal population centers would allow for lower latency and more efficient handling of data. This would increase speeds for various digital services. Perhaps counterintuitively, the servers themselves might also benefit from being dropped in the drink. The inert gases and liquids used to fill underwater data centers are less corrosive than ambient air, leading to longer lifespans for the equipment. The servers are also protected from possible human damage incurred by everyday movement -- people banging into them, dropping them, or accidentally unplugging them. Placing a data center pod or retrieving it for maintenance is fairly simple, according to Subsea Cloud’s Williams. “Let's say the water is 100 meters deep. It's just an hour-long job. If it’s 3,000 meters deep, it will probably take five or six hours to get the pod down.”


What you don’t know about data management could kill your business

Contributing to the general lack of data about data is complexity. There are many places in the enterprise where data spend happens. Individual business units buy data from third parties, for example. Taking enterprise-wide inventory of all the data feeds being purchased and getting an accurate picture of how all that purchased data is being put to use would be a good first step. The reality is that a significant portion of the data sloshing about modern enterprises is replicated in multiple locations, poorly classified, idiosyncratically defined, locked in closed platforms, and trapped in local business processes. Data needs to be made more liquid in the way of an asset portfolio — that is, transformed to ease data asset reuse and recombination. ... Traditionally business schools have avoided data as a topic, pumping out business leaders who erroneously feel that data is someone else’s job. I recall the mean-spirited dig at early career Harvard Business School alums expecting their assistants to bring in the day’s work arrayed as a case study — that is, a crisp 20-page synopsis of all the relevant issues.


Stop panic buying your security products and start prioritizing

Look inward and optimize. Companies need to understand which parts of their networks and data are most attractive and most vulnerable to attackers. Get visibility into what you have, calculate the value of your tools, and use the information to move forward. Understanding risk by gaining full visibility into what you already have can allow companies to communicate better with investors and the public in the case of an attack or breach. For example, they will be able to give clear information about the impact (or lack of impact) on the business when an attack occurs and lay out clear steps for remediation, rather than having to guess at the next best course of action. ... It is important to remember that the goal is not to buy more tools to chase the growing number of vulnerabilities that experts find every day, but to protect the assets that are most relevant to overall vital business operations and limit the fallout of inevitable cyber incidents. By attaching a dollar value to the cyber risks the organization is up against, you will be in a much better position to discuss your security plan and budgetary needs.
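
The excerpt recommends attaching a dollar value to cyber risk but does not show how such a figure might be derived. Below is a minimal sketch of one common approach, annualized loss expectancy, which the article does not prescribe; every asset name, value, exposure factor, and incident rate in it is an invented placeholder.

```python
# Minimal sketch of annualized loss expectancy (ALE), one common way to put a
# dollar figure on cyber risk. All asset values, exposure factors, and
# occurrence rates below are hypothetical illustrations, not real estimates.

assets = [
    # (asset name, asset value in $, exposure factor 0..1, expected incidents per year)
    ("customer database", 2_000_000, 0.40, 0.30),
    ("payment gateway",   5_000_000, 0.25, 0.10),
    ("employee laptops",    300_000, 0.60, 1.50),
]

for name, value, exposure, rate in assets:
    sle = value * exposure   # single loss expectancy: estimated cost of one incident
    ale = sle * rate         # annualized loss expectancy: expected yearly loss
    print(f"{name:18s}  SLE=${sle:>12,.0f}  ALE=${ale:>12,.0f}")
```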


US, UK Cyber Agencies Spearhead Global AI Security Guidance

The guidance, the agencies say, is good for all AI developers but is particularly aimed at AI providers who use models hosted by a third party or who use APIs to connect to a model. Risks include adversarial machine learning threats stemming from vulnerabilities in third-party software and hardware applications. Hackers can exploit those flaws to alter model classification and regression performance and to corrupt training data and carry out prompt injection or data poisoning attacks that influence the AI model's decisions. Hackers can also target vulnerable systems to allow unauthorized actions as well as extract sensitive information. The guidance describes cybersecurity as a "necessary precondition" for AI safety and resilience. CISA, in particular, has been on a protracted campaign to evangelize the benefits of secure by design while also warning tech companies that the era of releasing products to the public containing security flaws must come to an end (see: US CISA Urges Security by Design for AI). The guidelines represent a "strong step" in providing universal standards and best practices for international AI security operations and maintenance, according to Tom Guarente, vice president of external and government affairs for the security firm Armis. "But the devil is in the details."


Data De-Identification: Balancing Privacy, Efficacy & Cybersecurity

There are two primary laws guiding online privacy: the General Data Protection Regulation (GDPR) and the California Privacy Rights Act (CPRA), although many countries and states have started to write their own. Among the various safeguard measures, data de-identification is a prime one. Both define data de-identification as the process of anonymizing PII so that no piece of secondary information, when associated with the personal data, can identify the individual. The industry unanimously agrees that some items, including a name, address, email address, and phone number, are personal data. Others, such as an IP address (and versions of it), are a matter of interpretation. These laws neither explicitly list the attributes that are personal nor do they mention how and when to anonymize, beyond sharing a few best practices. ... However, full anonymization of personal data and the data linked to it is useless to businesses in this ever-digital world. Every new technological breakthrough demands massive input of data sets — both personal and aggregated.
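
As a rough illustration of the gap the article describes between usable data and protected data, the sketch below replaces direct identifiers with salted hashes while keeping analytic fields intact. Note that regulators generally treat this as pseudonymization rather than full anonymization, since whoever holds the salt can re-link values; the record fields and values are invented.

```python
import hashlib
import secrets

# Hypothetical record containing direct identifiers (PII) and an analytic field.
record = {"name": "Jane Doe", "email": "jane@example.com", "purchases": 7}

# A secret salt, held separately from the data. Anyone with the salt (and a
# guessable input space) can re-link values, which is why this counts as
# pseudonymization, not full anonymization.
SALT = secrets.token_bytes(16)

def pseudonymize(value: str) -> str:
    """Replace a direct identifier with a truncated salted SHA-256 digest."""
    return hashlib.sha256(SALT + value.encode("utf-8")).hexdigest()[:16]

safe_record = {
    "name": pseudonymize(record["name"]),
    "email": pseudonymize(record["email"]),
    "purchases": record["purchases"],   # aggregated/analytic fields kept as-is
}
print(safe_record)
```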


NCSC publishes landmark guidelines on AI cyber security

The set of guidelines has been designed to help the developers of any system that incorporates AI make informed decisions about cyber security during the development process – whether it is being built from scratch or as an add-on to an existing tool or service provided by another entity. The NCSC believes security to be an “essential pre-condition of AI system safety” and integral to the development process from the outset and throughout. In a similar way to how the secure-by-design principles alluded to by CISA’s Easterly are increasingly being applied to software development, the cognitive leap to applying the same guidance to the world of AI should not be too difficult to make. The guidelines, which can be accessed in full via the NCSC website, break down into four main tracks – Secure Design, Secure Development, Secure Deployment and Secure Operation and Maintenance, and include suggested behaviours to help improve security. These include taking ownership of security outcomes for customers and users, embracing “radical transparency”, and baking secure-by-design practices into organisational structures and leadership hierarchies.


AI partly to blame for spike in data center costs

The research firm attributes the year-over-year jump in absorption to the impact of AI requirements, as well as the growth of traditional cloud computing needs across the region. However, the industry is facing challenges, not the least of which is securing enough power. "Securing power continues to be a challenge for developers, pushing some to the outskirts of primary markets, as well as fueling growth in secondary and tertiary markets. This has led to an uptick in land-banking by companies hoping to secure space and power for future growth," DatacenterHawk said in its report. ... Cloud providers continue to consume most of the capacity in North America. AI and machine learning also continue to drive activity across US and Canadian markets, though the full impact of these rapidly growing industries has yet to be seen, the report noted. Submarkets within major markets will continue to grow from hyperscale users and data center operators, DatacenterHawk predicts. Older, enterprise data centers will be targets for AI companies that need bridge power quickly, providing an environment that allows them to grow larger over time.



Quote for the day:

"Amateurs look for inspiration; the rest of us just get up and go to work." -- Chuck Close

Daily Tech Digest - November 27, 2023

From Risk to Resilience: Safeguarding BFSI Against Increasing Threats

As financial transactions increasingly migrate to digital platforms, safeguarding sensitive data and systems has become the linchpin for maintaining trust and stability in the industry. Customer trust forms the bedrock of any successful financial institution. With the advent of digital banking and the proliferation of online transactions, customers expect their financial data to be treated with the utmost confidentiality and security. A single breach can erode trust irreparably, leading to customer attrition and reputational damage. To uphold trust, BFSI organizations must adopt a proactive cybersecurity posture. This entails not only implementing robust security measures but also fostering a culture of cybersecurity awareness among employees and customers alike. ... Converged IAM represents a paradigm shift in cybersecurity strategy. It combines traditional IAM, which manages user identities and access to resources, with Identity Governance and Administration (IGA), which ensures compliance with internal policies and external regulations. This convergence empowers organizations to have a unified view of user identities and their associated access rights, thereby bolstering security measures.


Innovation in data centers: Navigating challenges and embracing sustainability

Navigating the challenge of finding solutions that meet all constraints is a constant endeavour in the data center industry. Daily operations involve continuous optimization efforts, where sustainability and cost-effectiveness are pivotal considerations. Contrary to common perception, sustainable solutions are not invariably more expensive; their cost-effectiveness depends on the thorough assessment of environmental implications. Consider the approach the industry has taken to battery technology optimization as an example. Traditionally, lead batteries have been a standard industry solution. However, exploring new technologies, such as lithium-ion batteries, introduces a diverse range of options. While these batteries may be more intricate and expensive in the production phase, a holistic lifecycle analysis reveals their extended service life and lower total cost of ownership. This emphasises the need to evaluate innovation not only in terms of initial costs but also in terms of environmental impact and the overall project lifecycle.


Rise of the cyber CPA: What it means for CISOs

The biggest value-add these new talents are likely to deliver is in helping CISOs sell security programs more effectively. "CISOs are not known to speak in [terms of] ROI effectively, at least not in the practical ROI issues lines of business executives care about. And after hearing these ineffective arguments for years, many CFOs are eventually not listening," Yigal Rechtman, managing partner of Rechtman Consulting, a New Jersey-based compliance and forensic accounting firm, tells CSO. Even if the new cyber accountants don't immediately deliver better ROI arguments, argues Phil Neray, the VP of cyber defense security at Gem Security, their financial approach and different mindsets might prove quite valuable. "Fighting our cyber adversaries requires having different approaches and different viewpoints and different worldviews," he tells CSO. "Therefore, having a diversity of perspectives on your security team is going to make your team stronger. And these cyber accountants might do just that."


Why it’s the perfect time to reflect on your software update policy

The foundation of a sound software update policy begins with thorough pre-work. This involves setting the groundwork for delivering successful updates, creating an inventory of devices, documenting baseline configurations, and understanding the applications that are critical to business operations. Organizations must establish baseline configurations and communicate the requisite standards to users. A comprehensive inventory of all devices used for work, including BYOD and unmanaged devices, is essential. This also encompasses documenting the end of support for devices being phased out, noting the critical business applications in use, and understanding which devices and users depend on them. Identifying devices that are no longer receiving security updates yet access critical applications should be a priority. Similarly, sufficient staff must be allocated to the help desks to cope with increased queries during update rollouts. Organizations should also prepare a diverse group of informed early adopters and testers from across the business spectrum to ensure that feedback is timely and representative. 
It’s easy to predict a rosy future but far harder to deliver it. Gates can gush that “agents will be able to help with virtually any activity and any area of life,” all within five years, but for anyone who has actually used things like Midjourney to edit images, the results tend to be really bad, and not merely in terms of quality. I tried to make Mario Bros. characters out of my peers at work and discovered that Caucasians fared better than Asians. ... “The key to understanding the real threat of prompt injection is to understand that AI models are deeply, incredibly gullible by design,” notes Simon Willison. Willison is one of the most expert and enthusiastic proponents of AI’s potential for software development (and general use), but he’s also unwilling to pull punches on where it needs to improve: “I don’t know how to build it securely! And these holes aren’t hypothetical, they’re a huge blocker on us shipping a lot of this stuff.” The problem is that the LLMs believe everything they read, as it were. By design, they ingest content and respond to prompts. They don’t know how to tell the difference between a good prompt and a bad one.


Scaling SRE Teams

Scaling may come naturally if you do the right things in the right order. First, you must identify what your current state is in terms of infrastructure. How well do you understand the systems? Determine existing SRE processes that need improvement. For the SRE processes that are necessary but are not employed yet, find the tools and the metrics necessary to start. Collaborate with the appropriate stakeholders, use feedback, iterate, and improve. ... SLOs set clear, achievable goals for the team and provide a measurable way to assess the reliability of a service. By defining specific targets for uptime, latency, or error rates, SRE teams can objectively evaluate whether the system is meeting the desired standards of performance. Using specific targets, a team can prioritize their efforts and focus on areas that need improvement, thus fostering a culture of accountability and continuous improvement. Error budgets provide a mechanism for managing risk and making trade-offs between reliability and innovation. 
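
The excerpt refers to SLO targets and error budgets without showing the underlying arithmetic. A minimal sketch follows, assuming an invented 99.9% availability target and invented request counts; when the consumed budget passes 100%, the usual trade-off is to prioritize reliability work over new features.

```python
# Minimal error-budget arithmetic for an availability SLO.
# The 99.9% target and the request counts are invented for illustration.

slo_target = 0.999                 # 99.9% of requests should succeed this period
total_requests = 12_500_000
failed_requests = 9_800

error_budget = (1 - slo_target) * total_requests   # failures we can "afford"
budget_consumed = failed_requests / error_budget

print(f"Error budget: {error_budget:,.0f} failed requests")
print(f"Budget consumed so far: {budget_consumed:.1%}")
if budget_consumed > 1.0:
    print("SLO breached: prioritize reliability work over new features.")
```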


IT staff augmentation pros and cons

IT staff augmentation does have aspects that organisations will need to consider before they decide to adopt it. James suggests there can be an impact on workplace culture, which is often overlooked by organisations. “Certain contractors, particularly those who frequently work for the same company or who are contracted on a longer-term basis, will ingratiate themselves into the fabric and culture of a team, while others simply want to get in, get their work done and get out,” says James. “This could affect internal staff, employee relations and damage company culture.” Temporary team members could also feel left out and be less motivated than in-house employees, suggests Martin Hartley, group CCO of emagine Consulting, which can cause issues if they then decide to leave. “If you put the time into bringing someone into your team and they don’t work out, businesses are forced to repeat the process and go back out to the market to find a person with the right skills to replace that role, which takes time,” he warns. There can also be legal issues to think about.


10 things keeping IT leaders up at night

Many CIOs are troubled by a similar, related issue — a lack of full knowledge of and visibility into what they have in their IT environments. “It’s not knowing what you don’t know,” says Laura Hemenway, president, founder, and principal of Paradigm Solutions, which supports large enterprise-wide transformations. Many IT departments lack strong documentation around their code, processes, and systems, says Hemenway, who also serves as a fractional CIO and is a leader with the Arizona SIM chapter. Additionally, they don’t know all the places where their organization’s data lives, who touches it, and why. “CIOs went through so much so quickly in the past few years, that there is no transformation project that’s not full of data unknowns, process gaps, broken interfaces, or expired programs,” she says, calling them “all ticking time bombs.” “And unless CIOs take the time to create a solid foundation, this is going to be pulling at them, rolling around in the back of their head,” she says.


How To Prepare Your Organization for AI’s Impact on Data Processes and Products

In many ways, the advanced data literacy trend is connected to the growth of AI technologies. However, beyond the need for users to understand the breadth and power of AI, advanced data literacy requires users know the related terminologies, KPIs, and metrics connected with AI. Data literacy is not just about how data is consumed. It's about how it is classified within your data ecosystem. You should focus on the literacy needs of each user and train them to understand the parts of your data ecosystem they will need to access to contribute to AI product development. You must implement a solid, comprehensive framework that informs when and how you roll out data literacy programs in the various departments and teams in your organization. ... You will also need a compliance strategy that incorporates all the latest requirements and includes processes for implementation. The best way to ensure this is through a dedicated data governance tool. The tool should be configured to ensure that only verified users can access specific data assets and that data is only made available for usage according to compliance regulations. 


Unlocking Business Growth: A Strategic Approach To Tackling Technical Debt

Tech debt is like a time bomb ticking within an organisation. While short-term solutions may work at first, they often crumble over time, causing costly rework and inefficiencies. A series of trade-offs and technical workarounds begin to compromise the long-term health of an organisation’s technological infrastructure. Although it is distinct from obsolescence or depreciation, its consequences are far-reaching: loss of systemic intelligence, compromised operational advantage and ineffective use of capital. In the simplest terms, it causes organisations to pay 20-40% to manage this inefficiency; it also consumes as much as 30% of the IT team’s talent. ... The first step is to reframe tech debt as a component of modernisation. This pivot refocuses the organisations on a clear and shared vision for modernisation efforts. It is an opportunity for candid executive conversations to assess the existing tech debt and strategies for the future. Step two gets to the business benefits of modernisation. These extend beyond the IT department, so the effort to support and manage modernisation does as well. 



Quote for the day:

"When the sum of the whole is greater than the sum of the parts, that's the power of teamwork." -- Gordon Tredgold

Daily Tech Digest - November 26, 2023

European Commission Failing to Tackle Spyware, Lawmakers Say

As that deadline looms, lawmakers accused the European Commission of failing to act. On Thursday, they passed a resolution that attempts to force the European Commission to present the legislative changes recommended in May by the PEGA Committee. At a plenary session in Strasbourg, EU lawmakers said that the European Commission's inaction had facilitated an uptick in recent spyware cases. Such cases have included the alleged targeting of exiled Russian journalist Galina Timchenko using Pegasus when she was based in Germany, as well as the Greek government's attempt to thwart investigations into spyware abuse by its ministers. In contrast to the EU approach, lawmakers highlighted the U.S. government's blacklisting in July of European spyware firms Intellexa and Cytrox and the Biden administration's citing of the companies' risk to U.S. national security and foreign policy. Speaking at the Thursday plenary, EU Justice Commissioner Didier Reynders condemned using spyware to illegally intercept personal communications, adding that member states cannot use "national security" as a legal basis to circumvent existing laws and indiscriminately target their citizens.


Mastering the art of differentiation: Vital competencies for thriving in the age of artificial intelligence

With AI designed to make decisions using algorithms grounded in data and patterns, these algorithms are only as dependable as the data they are trained on and can be influenced by the assumptions and biases of their creators. Consequently, it is imperative to employ critical thinking skills to assess AI decisions and guarantee that they align with our values and objectives. Moreover, critical thinking is essential for resolving complex issues that may exceed AI’s capabilities. Developing critical thinking skills involves cultivating the ability to analyze, evaluate, and synthesize information to make informed decisions. ... In this rapidly evolving modern landscape, heavily influenced by digital technologies, cultivating a high LQ is indispensable for the long-term success and sustainability of both employees and organizations. In the business world, change is constant, making continuous learning and development essential at every level of the organization to ensure we consistently make the right decisions. High LQ empowers employees to foster innovation and creativity, cultivate resilience, and position themselves more effectively to future-proof their careers. 


Digital advocacy group criticizes current scope of the EU AI Act

The group’s core argument is that the AI Act now goes beyond its original intended scope, and should instead remain focused on high-risk use cases, rather than being directed at specific technologies. Digital Europe also warned that the financial burden the Act could place on companies wanting to bring AI-enabled products to market could make operating out of the EU unsustainable for smaller organizations. “For Europe to become a global digital powerhouse, we need companies that can lead on AI innovation also using foundation models and GPAI (general-purpose AI),” the statement read. “As European digital industry representatives, we see a huge opportunity in foundation models, and new innovative players emerging in this space, many of them born here in Europe. Let’s not regulate them out of existence before they get a chance to scale, or force them to leave.” The letter was signed by 32 members of Digital Europe and outlined four recommendations that signatories believe would allow the Act to strike the necessary balance between regulation and innovation.


HR Leaders unleashing retention success through employee well-being

“The pandemic brought the discourse on mental health to the forefront and normalised talk about stress and mental health in all forums. Accordingly, a formalised framework to address the mental health of employees has been put in place. Wellness webinars on these topics are delivered through tie-ups with service providers and in-house subject matter experts. Webinars on mental health are regularly organised with an aim to destigmatise mental health through increasing awareness on topics such as mental health awareness, digital & screen detox, and stress management, etc. We continuously work on instituting policies that are customised as per the individual and life-stage needs of the employees. An employee assistance program, in tie-up with a service provider, is in place to facilitate mental health conversations with qualified professionals. In addition, the employees are nudged to incorporate habits that help take care of their mental well-being as an unconscious part of their lives. Initiatives such as the ‘Mental Health Bingo’ card and ‘I De-stress myself by __’ campaigns are launched.


How generative AI changes the data journey

We see generative AI used in the observability space throughout many industries, especially regarding compliance. Let’s look at healthcare, an industry where you must comply with HIPAA. You are dealing with sensitive information, generating tons of data from multiple servers, and you must annotate the data with compliance tags. An IT team might see a tag that says, “X is impacting 10.5.34 from GDPR…” The IT team may not even know what 10.5.34 means. This is a knowledge gap—something that can very quickly be fulfilled by having generative AI right there to quickly tell you, “X event happened, and the GDPR compliance that you’re trying to meet by detecting this event is Y…” Now, the previously unknown data has turned into something that is human readable. Another use case is transportation. Imagine you’re running an application that’s gathering information about flights coming into an airport. A machine-generated view of that will include flight codes and airport codes. Now let’s say you want to understand what a flight code means or what an airport code means. Traditionally, you would use a search engine to inquire about specific flight or airport codes. 


Banks May Be Ready for Digital Innovation: Many of the Staff Aren’t

A major roadblock to training workers is that many don’t actually bank with their employer. This makes training critical, especially for frontline staff members, says John Findlay, chief executive and founder of digital learning company LemonadeLXP, based in Ontario, Canada. “If their staff doesn’t bank with them, they don’t use the technologies on offer and it’s pretty difficult for them to promote them to customers,” he says. It’s also difficult for them to answer customer questions. Brian McNutt, U.S. vice president of product management at Dutch engagement platform Backbase, says banks should incentivize their staff to actually use their services as much as possible. One approach is to offer special rates or deals to employees, he says. “I think that really the most important thing is that they are customers themselves. There’s really no replacement for that. For somebody to really be able to empathize or understand customers, they have to experience the products themselves.”


The Future of Software Engineering: Transformation With Generative AI

The application of Generative AI in software engineering is not just a technical enhancement but a fundamental change in how software is conceptualized, developed, and maintained. This section delves into the key themes that underline this transformative integration, elucidating the diverse ways in which Generative AI is reshaping the field. Generative AI is revolutionizing the way code is written and maintained. AI models can now understand programming queries in natural language and translate them into efficient code, significantly reducing the time and effort required from human developers. This has several implications. Enhanced productivity: developers can focus on complex problem-solving rather than spending time on routine coding tasks. Learning and development: AI models can suggest best coding practices and offer real-time guidance, acting as a learning tool for novice programmers. Code quality improvement: with AI's ability to analyze vast codebases, it can recommend optimizations and improvements, leading to higher quality and more maintainable code.


Reports: China’s Alibaba Shuts Down Quantum Lab

DoNews reported this week that Alibaba’s DAMO Academy (the Academy for Discovery, Adventure, Momentum and Outlook) has closed down its quantum laboratory due to budget and profitability reasons. The budget ax claimed more than 30 people, possibly among China’s brightest quantum researchers, who lost their positions, according to the news outlet’s internal sources. As further proof, DoNews reports that the official website of DAMO Academy has also removed the introduction page of the quantum laboratory. According to the story, translated into English: “Insiders claimed that Alibaba’s DAMO Academy Quantum Laboratory had undergone significant layoffs, but it was not clear at that time whether the entire quantum computing team had been disbanded.” Media further suggest that many of the DAMO Academy quantum team members who were laid off have begun to send their resumes to other companies. According to The Quantum Insider’s China’s Quantum Computing Market brief, Alibaba is a diverse tech conglomerate that has been active in quantum since 2015. The company’s Quantum Lab Academy has been teaching employees and students about the prospects of quantum computing.


It’s time the industry opts for collaborative manufacturing

The transition from an analogue factory to a digital one underscores the necessity of a coherent and efficient digital infrastructure. This transformation extends beyond the primary tasks of manufacturing, adding efficiency at every stage, including the cutting room. Investments in IoT-enabled machinery, though costly, can lead to significant improvements in output and efficiency. ... The technology underlines the importance of integrated planning software, which aids in production planning, order flow management and the efficient consumption of raw materials. As technology continues to evolve and digitisation gains ground, an important question emerges while making the roadmap: What are the social implications of this technological revolution? In a city like Bengaluru and its surrounding manufacturing hubs, more than 3.5 million women toil in the garment industry, forming the majority of the workforce. Their livelihoods hinge on operating sewing machines, a vocation they might continue for the next two decades.


The Digital Revolution in Banking: Exploring the Future of Finance

As banks continue to close their physical branches, it becomes crucial to balance the convenience of digital banking and the personalized service that customers crave. While online banking has become increasingly popular, some still prefer the in-person experience of visiting their local branch and interacting with staff. This is especially important when it comes to welcoming new customers. To address this, emerging technologies, such as augmented reality (AR) and virtual reality (VR), may offer a solution to bridge the gap between digital convenience and personalized service. Imagine you are a banking executive looking for ways to improve your customer experience. You know that digital banking is the future, but you also understand that some customers still crave the personalized service of visiting a physical branch. This is where augmented reality (AR) and virtual reality (VR) come in. By incorporating AR into your mobile app, you can enhance the interface and provide customers with more information in an immersive way. 



Quote for the day:

"Success is the sum of small efforts, repeated day-in and day-out." -- Robert Collier

Daily Tech Digest - November 25, 2023

Building a Successful Data Quality Program

Assessing Data Quality often includes establishing a standard of acceptable Data Quality, using data profiling and analysis techniques, and using statistical methods to identify and correct any Data Quality issues. The key features (often called “dimensions”) that should be examined and measured are: Completeness: Data should not be missing or have incomplete values. Uniqueness: Locate and eliminate copies to ensure the information in the organization’s data files is free of duplication. Validity: This refers to how useful the data is, and how well the data conforms to the organization’s standards. Timeliness: Old information that is often no longer true or accurate needs to be removed. Data can be measured using its relevance and freshness. Out-of-date data should be eliminated, so as not to cause confusion. Accuracy: This is the precision of data, and how accurately it represents the real-world information. Consistency: When data is copied, the information should be consistent and accurate. The need for a single source of accurate in-house data provides a good argument for the use of master data and its best practices.
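
A minimal sketch of how two of the dimensions above, completeness and uniqueness, might be measured over a table of rows; the sample records and field names are invented for illustration.

```python
# Minimal profiling sketch for two of the dimensions above: completeness
# (missing values) and uniqueness (duplicates). The rows are invented.

rows = [
    {"id": 1, "email": "a@example.com", "city": "Austin"},
    {"id": 2, "email": None,            "city": "Boston"},
    {"id": 3, "email": "a@example.com", "city": "Austin"},   # duplicate email
]

def completeness(rows, field):
    """Share of rows whose `field` is present and non-empty."""
    filled = sum(1 for r in rows if r.get(field) not in (None, ""))
    return filled / len(rows)

def uniqueness(rows, field):
    """Share of non-missing values that are distinct."""
    values = [r[field] for r in rows if r.get(field) not in (None, "")]
    return len(set(values)) / len(values) if values else 1.0

for field in ("email", "city"):
    print(f"{field}: completeness={completeness(rows, field):.0%}, "
          f"uniqueness={uniqueness(rows, field):.0%}")
```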


Building brand trust in a new era of data privacy

Emily emphasized the importance of anonymizing data to utilize it in aggregate without compromising individual privacy, a task that requires close collaboration between technical and marketing departments. Anita introduced the intriguing concept of a Chief Trust Officer, a role highlighted by Deloitte, which spans data, business, and marketing, safeguarding all aspects of compliance and privacy. The idea of having such a partner resonated with her, underlining the multifaceted nature of trust in business operations. Jake echoed the sentiment, stressing the need for understanding the types of data at hand and leveraging them without violating regulations - a balance that is critical yet challenging to achieve. These insights from the panelists underscore a common theme: building brand trust in the digital age is a multifaceted challenge that requires a blend of transparency, consistency, and compliance. As we continue to delve into this topic, it's clear that the role of data privacy is not just a technical issue but a cornerstone of the customer-brand relationship.


How Does Technical Debt Affect QA Testers

How many times have your testers been caught off guard at the last minute when the delivery manager abruptly appeared and said, “Guys, we need to launch our product in a week, and we are very sorry for not communicating this sooner. Please complete all test tasks ASAP so that we can begin the demo”? Simply put, any missing tests or “fix it later” attitude can result in a tech debt problem. Lack of test coverage, excessive user stories, short sprints, and other forms of “cutting corners” due to time constraints all contribute significantly to the building of technical debt in QA practice. When the complexity of the testing mesh began to grow with each new sprint, a US-based online retailer with a strong presence across various websites and mobile apps found itself in a real-world “technical debt” dilemma. ... Most QA managers mistakenly believe that tech debt is a legitimate result of putting all of your work on the current sprint alone, which leads to completing test coverage manually and completely ignoring automation. According to agile principles, we should see the tech debt problem as an inability to maintain and meet QA benchmarks.


How digital twins will enable the next generation of precision agriculture

Digital twins are digital representations of physical objects, people or processes. They aid decision-making through high-fidelity simulations of the twinned physical system in real time and are often equipped with autonomous control capabilities. In precision agriculture, digital twins are typically used for monitoring and controlling environmental conditions to stimulate crop growth at an optimal and sustainable rate. Digital twins provide a live dashboard to observe the environmental conditions in the growing area, and with varying autonomy, digital twins can control the environment directly. ... Agriculture is among the lowest-digitalized sectors, and digital maturity is an absolute prerequisite to adopting digital twins. As a consequence, costs related to digital maturity often overshadow technical costs in smart agriculture. A company undergoing the early stages of digitalization will have to think about choosing a cloud provider, establishing a data strategy and acquiring an array of software licences, to name just a few critical challenges.


What are Software Design Patterns?

Software design patterns are an essential aspect of software development that helps developers and engineers create reusable and scalable code. These patterns provide solutions to commonly occurring problems in software design, enabling developers to solve these problems efficiently and effectively. In essence, a software design pattern is a general solution to a recurring problem in software design that has been proven to be effective. It's like a blueprint for a specific type of problem that developers can use to create software systems that are reliable, maintainable, and scalable. Software design patterns have been around for a long time and are widely used in the software development industry. They are considered to be a best practice in software design because they provide a standardized approach to solving common problems, making it easier for developers to communicate and collaborate with one another. In this blog, we will explore what software design patterns are, the different types of software design patterns, and the benefits of using them in software development. 


Examples of The Observer Pattern in C# – How to Simplify Event Management

The observer pattern is an essential software design pattern used in event-driven programming and user interface development. It is composed of three primary elements: the subject, observer, and concrete observers. The subject class is responsible for keeping track of the observer objects and notifying them of changes in the subject’s state. On the other hand, the observer is the object that wishes to be notified when the state of the subject changes. Finally, the concrete observer is an implementation of the observer interface. One of the observer pattern’s significant advantages is its capability to facilitate efficient event management in software development. By leveraging this ability, developers can trigger related events without tightly coupling the pieces of code that lead to those events. The observer pattern also helps keep changes from rippling through the code in a chain reaction. The observer pattern’s primary components are the Subject, Observer, and Concrete Observer. The subject defines the interface for attaching and detaching observers from the subject object.
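
The article's examples are in C#; purely as an illustration of the same Subject/Observer/concrete-observer structure it describes, here is a compact Python sketch (the class and method names are illustrative, not taken from the article).

```python
from abc import ABC, abstractmethod

class Observer(ABC):
    """The observer interface: anything that wants to be notified of changes."""
    @abstractmethod
    def update(self, state): ...

class Subject:
    """Keeps track of observers and notifies them when its state changes."""
    def __init__(self):
        self._observers = []
        self._state = None

    def attach(self, observer: Observer):
        self._observers.append(observer)

    def detach(self, observer: Observer):
        self._observers.remove(observer)

    def set_state(self, state):
        self._state = state
        for obs in self._observers:   # loose coupling: subject knows only the interface
            obs.update(state)

class LoggingObserver(Observer):
    """A concrete observer that simply logs state changes."""
    def update(self, state):
        print(f"state changed to {state!r}")

subject = Subject()
subject.attach(LoggingObserver())
subject.set_state("order_shipped")
```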


Cloud Computing: A Comprehensive Guide to Trends and Strategies

As a company moves to the cloud, they reduce the number of servers and other hardware their IT department has to maintain. Cloud computing efficiently uses today’s powerful processors, fast networks, and massive amounts of storage. Cloud virtual machines allow businesses to run multiple servers on one physical machine. Containers take that concept a step further. Containers are a lightweight form of virtualization that packages applications and their dependencies in a portable manner. This means that if, for instance, a company wants to run a web server, they no longer have to devote physical or virtual machines to host the server software. A container with only the needed bits runs in the cloud, appearing to the outside world as if it were its dedicated machine. Many containers can run in the same cloud instance for maximum efficiency. This approach is sometimes called serverless computing or Function as a Service (FaaS). The application-level isolation inherent in serverless computing restricts the attack surface that attackers can exploit.


Judges Urged To Stay Abreast Of Electronic Tools

The Cyber Security Authority (CSA), with funding support from the European Commission Technical Assistance and Information Exchange (TAIEX) Instrument, is undertaking a series of workshops across Ghana to enhance the capacity of the judiciary and prosecutors regarding cybercrime and electronic evidence as a decisive factor in contributing to the rule of law. Expressing excitement about the training, the Chief Justice said e-commerce, e-trade, e-contracts, and intellectual property rights, among others, were now being conducted virtually, and the expertise of judges in these new trends was a prerequisite for the efficient trial of cyber-related cases, particularly in the gathering of electronic data. “Judges must develop new working skills by staying abreast of the digital space. You must develop leadership skills in this arena if you want to remain relevant in the system,” she stressed. Albert Antwi-Boasiako stated that the major regulatory activity being undertaken by the Authority to license cybersecurity service providers and accredit cybersecurity establishments and professionals was tailored to support the training of the judges.


Candy Alexander Explains Why Bandwidth and Security are Both in High Demand

It became painfully clear to everyone that the primary component for productivity depended on bandwidth. The increased bandwidth of networks has become the primary factor of success; whether you're a business just looking to ensure the productivity of your remote workers or provide a cloud service, throughput is everything. And with that, the world has expanded ubiquitous access and high availability of networks. In today's digital world, businesses of all sizes rely on data. That data is used to make decisions, operate efficiently, and serve customers. Data is essential for everything, from product to development, marketing, and customer support. However, with the rise of remote work and cloud computing, it has become more challenging to ensure that the data is always accessible and secure. The application of cybersecurity's golden triad of confidentiality, integrity, and availability is now focused on data rather than the on-premises systems and networks. Again, it's data that has become more important than ever before. 


Why Departments Hoard Data—and How to Get Them to Share

"Data hoarding within organizations can be attributed to a combination of cultural, operational and psychological factors," said Jon Morgan, CEO and editor-in-chief of Venture Smarter, a consulting firm in San Francisco. "When departments view data as a source of power or control, they are less inclined to share it with others, fearing that it might diminish their influence." Operational inefficiencies can also lead to data hoarding. "If access to data is cumbersome or time-consuming, employees may be less motivated to share it, preferring to keep it close for their own convenience," Morgan said. In addition, "psychological factors like fear of criticism or a desire to protect one's domain can also drive data hoarding." Employees may worry that sharing data will expose their mistakes or weaknesses, leading to a reluctance to collaborate, he said. Jace McLean, senior director of strategic architecture at Domo, a data platform based in American Fork, Utah, said he believes that cultural factors are the most important lever to use in changing data-hoarding habits. 



Quote for the day:

"If you don't demonstrate leadership character, your skills and your results will be discounted, if not dismissed." -- Mark Miller

Daily Tech Digest - November 24, 2023

How American Express Created an Open Source Program Office

American Express has established an open source program office that gamifies the safe development of open source code that can be poured back into the community. “Without the program existing, a lot of people at the company wouldn’t know about giving back to open source, they wouldn’t see the power in it,” said Amanda Chesin, software engineer at American Express, during a presentation at OSFF. The AmEx OSPO started as an informal group of developers trying to establish a symbiotic relationship with the open source community, said Tim Klever, vice president of the development experience at AmEx, at the conference. The first step was to convince the skeptical upper management of the value of open source. Security issues were the single largest concern among 56% of executives surveyed by FINOS. That was followed by quality of components, compliance with external regulations, and licensing of intellectual properties. ... “That’s really when we kind of became official because we had someone to worry about this stuff and work on it the whole time, even though we only got [her] for a summer,” Klever said.


Navigating the uncharted waters of the Digital Protection Act 2023: Overcoming unsolicited challenges in the digital realm

Of particular note is the provision for grievance redressal, affording individuals a legal avenue to hold data fiduciaries accountable. However, in contrast to the penalties imposed on data fiduciaries for non-compliance, the Data Protection Board's authority to levy fines on data principals (for violations of duties not to file frivolous complaints or impersonate others) is limited to a modest sum of up to ₹ 10,000. This duality poses a significant concern, as it introduces the possibility of groundless complaints. A successful complaint can yield a substantial ₹ 200 crore award, while an unsuccessful one carries a comparatively nominal penalty of ₹ 10,000. This dynamic could lead to an influx of speculative claims and an environment of undue frustration. There may be merit in revisiting the penalty structure, aligning it with the sum initially sought by the complainant to ensure the integrity of the complaint forum. One notable absence in the Act is the 'right to be forgotten', a provision in comparable digital data protection legislations like the GDPR. 


Could edge computing unlock AI’s vast potential?

Beyond the increased performance that AI applications demand, a key benefit of the edge model is reliability and resilience. Consumers have taken to AI, with 73% worldwide saying they trust content produced by generative AI, and 43% keen for organizations to implement generative AI throughout customer interactions. Businesses that can’t keep their AI-powered services running will suffer from declining customer satisfaction and even a drop in market share. When a traditional data center suffers a power outage – perhaps due to a grid failure or natural disaster – apps reliant on these centralized data centers simply cannot function. Edge computing avoids this single point of failure: with compute more distributed, smart networks can instead use the processing power nearest to them to keep functioning. There are also benefits when it comes to data governance. If sensitive data is processed at the edge of the network, it doesn’t need to be processed in a public cloud or centralized data center, meaning fewer opportunities to steal data at rest or in transit. ... Finally, there are cost savings to think about. Cloud service providers often charge businesses to transfer data from their cloud storage.


Cloud security and devops have work to do

First, they are not given the budget to plug up these vulnerabilities. In some instances, this is true. Cloud and development security are often underfunded. However, in most cases, the funding is good or great relative to their peers, and the problems still exist. Second, they can’t find the talent they need. For the most part, this is also legit. I figure that there are 10 security and development security positions that are chasing a single qualified candidate. As I talked about in my last post, we need to solve this. Despite the forces pushing against you, there are some recommended courses of action. CISOs should be able to capture metrics demonstrating risks and communicate them to executives and the board. Those are hard conversations but necessary if you’re looking to take on these issues as an executive team and reduce the impact on you and the development teams when stuff hits the fan. In many instances, the C-levels and the boards consider this a ploy to get more budget—that needs to be dealt with as well. Actions that can remove some of this risk include continuous security training for software development teams. 


Windows-as-an-app is coming

Windows App, which is still in beta, will let you connect to Azure Virtual Desktop, Windows 365, Microsoft Dev Box, Remote Desktop Services, and remote PCs from, well, pretty much any computing device. Specifically, you can use it from Macs, iPhones, iPads, other Windows machines, and — pay attention! — web browsers. That last part means you'll be able to run Windows from Linux-powered PCs, Chromebooks, and Android phones and tablets. So, if you've been stuck running Windows because your boss insists that you can't get your job done from a Chromebook, Linux PC, or Mac, your day has come. You can still run the machine you want and use Windows for only those times you require Windows-specific software. Mind you, you've been able to do that for some time. As I pointed out recently, all the Windows software vendors don't want you to run standalone Windows applications; they prefer web-based Software-as-a-Service (SaaS) applications. They can make a lot more money from you by insisting you pay a monthly subscription rather than a one-time payment. Sure, Microsoft made its first billions from Windows and the PC desktop, but that hasn't been its business plan for years now.


Q-Learning: Advancing Towards AGI and Artificial Superintelligence (ASI) through Reinforcement Learning

At its essence, Q-learning is akin to introducing a reward system to a computer, aiding it in deciphering the most effective strategies for playing a game. This process involves defining various actions that a computer can take in a given situation or state, such as moving left, right, up, or down in a video game. These actions and states are meticulously logged in what is commonly referred to as a Q-table. The Q-table serves as the computer’s playground for learning, where it keeps tabs on the quality (Q-value) of each action in every state. Initially, it’s comparable to a blank canvas – the computer embarks on this journey without prior knowledge of which actions will lead to optimal results. The adventure commences with exploration. The computer takes a plunge into trying out different actions randomly, navigating the game environment, and recording the outcomes in the Q-table. Think of it as the computer playfully experimenting and gradually figuring out the lay of the land. Learning from Rewards forms the core of Q-learning. Each time the computer takes an action, it earns a reward. 
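
A minimal sketch of the Q-table update the article describes, on an invented one-dimensional toy problem; the learning rate, discount factor, and exploration rate are typical defaults rather than values from the article.

```python
import random

# Toy problem: 5 states in a row; reaching state 4 yields a reward of +1.
n_states, actions = 5, ["left", "right"]
q_table = {(s, a): 0.0 for s in range(n_states) for a in actions}

alpha, gamma, epsilon = 0.1, 0.9, 0.2   # learning rate, discount, exploration rate

def step(state, action):
    nxt = max(0, state - 1) if action == "left" else min(n_states - 1, state + 1)
    reward = 1.0 if nxt == n_states - 1 else 0.0
    return nxt, reward

for _ in range(2000):                    # episodes
    state = 0
    while state != n_states - 1:
        # epsilon-greedy: mostly exploit the best known action, sometimes explore
        if random.random() < epsilon:
            action = random.choice(actions)
        else:
            action = max(actions, key=lambda a: q_table[(state, a)])
        nxt, reward = step(state, action)
        best_next = max(q_table[(nxt, a)] for a in actions)
        # Q-learning update: nudge Q(s, a) toward reward + discounted best future value
        q_table[(state, action)] += alpha * (reward + gamma * best_next - q_table[(state, action)])
        state = nxt

# After training, the greedy policy should point "right" in every non-terminal state.
print({s: max(actions, key=lambda a: q_table[(s, a)]) for s in range(n_states - 1)})
```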


ChatGPT Use Sparks Code Development Risks

Randy Watkins, CTO at Critical Start, advises organizations to build their own policies and methodology when it comes to the implementation of AI-generated code into their software development practices. “In addition to some of the standard coding best practices and technologies like static and dynamic-code analysis and secure CI/CD practices, organizations should continue to monitor the software development and security space for advancements in the space,” he told InformationWeek via email. He says organizations should leverage AI-generated code as a starting point but tap human developers to review and refine the code to ensure it meets standards. John Bambenek, principal threat hunter at Netenrich, adds that leadership needs to “value secure code” and make sure that at least automated testing is part of all code going to production. “Ultimately, many of the risks of generative AI code can be solved with effective and thorough mandatory testing,” he noted in an email. He explains that, as part of the CI/CD pipeline, mandatory testing should be done on all production commits and routine comprehensive assessment should be done on the entire codebase.


6 common problems with open source code integration

Closed source software is typically maintained, updated and patched exclusively by the software vendors, which can be a big benefit for development teams who lack the time, resources or expertise to do it themselves. Some open source platforms receive active support from proprietary software vendors, such as Red Hat Enterprise Linux and commercial distributions of Kubernetes. For the most part, however, organizations that deploy open source software are responsible for ensuring it remains updated. Failure to do so carries the risk of running outdated code that is buggy or has security vulnerabilities. This challenge is exacerbated by a lack of centralized management consoles or automated update processes that can help ensure all the open source components in use are up to date -- something often highlighted as an advantage of paying the price for proprietary software suites. This is another reason SCA tools are crucial for organizations that commit to the open source approach. While these tools don't provide automated update capabilities, they help the organization track what open source components exist and what each one's current version is. 
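
The excerpt notes that SCA tools track which open source components exist and what each one's current version is. As a rough illustration of that bookkeeping (not of how any particular SCA product works), the sketch below builds a simple inventory from an invented Python requirements file and flags unpinned entries.

```python
import re

# Invented contents of a Python requirements file; a real SCA tool would scan
# manifests like this across many repositories, languages, and package formats.
requirements = """
requests==2.28.1
flask>=2.0
cryptography
"""

inventory = []
for line in requirements.strip().splitlines():
    match = re.match(r"^([A-Za-z0-9_.\-]+)\s*(==|>=|<=|~=)?\s*(\S+)?", line)
    name, op, version = match.groups()
    pinned = op == "==" and version is not None
    inventory.append({"component": name, "version": version or "unspecified", "pinned": pinned})

for item in inventory:
    flag = "" if item["pinned"] else "  <- not pinned: version actually deployed is unknown"
    print(f'{item["component"]:15s} {item["version"]}{flag}')
```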


More questions for Australia cybersecurity strategy

Fairman believes that strategies are only good if they’re successfully implemented, and committing to reporting deadlines or processes is a way to reassure everyone that the government will do its best to stick to its plan. “We have to consider the financial impact of some of those measures on businesses, and the costs they will have to bear. The economy is still very much in a recovery phase, and many businesses will probably need some sort of financial support to afford cybersecurity upgrades. A cyber-health check for SMBs is great, but if most can’t afford to fill the identified cybersecurity gaps, the plan will fail,” added Fairman. ... As the strategy outlined six shields for cybersecurity, Thompson felt that a seventh, dedicated solely to citizen responsibility, would have been a useful inclusion. ... On sharing threat intelligence in the region, Thompson, who is also the former head of information warfare for the Australian Defense Forces, said that the government’s strong focus on sovereign industry is something for which he and others have long campaigned.


AI and contextual threat intelligence reshape defense strategies

Cybersixgill believes that in 2024, threat actors will use AI to increase the frequency and accuracy of their activities by automating large-scale cyberattacks, creating duplicitous phishing email campaigns, and developing malicious content targeting companies, employees, and customers. Malicious attacks like data poisoning and vulnerability exploitation in AI models will also gain momentum, causing organizations to unwittingly provide sensitive information to untrustworthy parties. Similarly, AI models can be trained to identify and exploit vulnerabilities in computer networks without detection. Cybersixgill also predicts the rise of shadow generative AI, where employees use AI tools without organizational approval or oversight. Shadow generative AI can lead to data leaks, compromised accounts, and widening vulnerability gaps in a company’s attack surface. ... The C-suite and other executives will need a clearer understanding of their organization’s cybersecurity policies, processes, and tools. Cybersixgill believes companies will increasingly appoint cybersecurity experts to the board to fulfill progressively stringent reporting requirements and conduct good cyber governance.



Quote for the day:

"Remember, teamwork begins by building trust. And the only way to do that is to overcome our need for invulnerability." -- Patrick Lencioni

Daily Tech Digest - November 23, 2023

Web Shells Gain Sophistication for Stealth, Persistence

One reason attackers have taken to Web shells is their ability to stay under the radar. Web shells are hard to detect with static analysis techniques because the files and code are so easy to modify. Moreover, because Web shell traffic is just HTTP or HTTPS, it blends right in, making it hard to detect with traffic analysis, says Akamai's Zavodchik. "They communicate on the same ports, and it's just another page of the website," he says. "It's not like the classic malware that will open the connection back from the server to the attacker. The attacker just browses the website. There's no malicious connection, so no anomalous connections go from the server to the attacker." In addition, because there are so many off-the-shelf Web shells, attackers can use them without tipping off defenders as to their identity. The WSO-NG Web shell, for instance, is available on GitHub. And Kali Linux, an open source Linux distribution focused on providing easy-to-use tools for red teams and offensive operations, ships 14 different Web shells, giving penetration testers the ability to upload and download files, execute commands, and create and query databases and archives.
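The point about static analysis can be illustrated with a deliberately naive scanner. The sketch below is a hypothetical Python script that greps a web root for function names commonly abused in PHP web shells; the directory path and the signature list are assumptions for the example, and precisely because shell code is trivial to rewrite or obfuscate, a signature scan like this is easy for an attacker to evade.

import pathlib
import re

# Deliberately naive signature scan: flag PHP files in the web root that call
# functions commonly abused by web shells. Simple obfuscation defeats it, which
# is exactly why static detection of web shells is unreliable.
WEB_ROOT = pathlib.Path("/var/www/html")          # assumed document root
SUSPICIOUS = re.compile(r"\b(eval|system|exec|passthru|shell_exec|base64_decode)\s*\(")

for php_file in WEB_ROOT.rglob("*.php"):
    try:
        text = php_file.read_text(errors="ignore")
    except OSError:
        continue
    hits = sorted(set(SUSPICIOUS.findall(text)))
    if hits:
        print(f"{php_file}: suspicious calls -> {', '.join(hits)}")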


Will More Threat Actors Weaponize Cybersecurity Regulations?

Based on what has been disclosed thus far, the breach sounds relatively minor, but ALPHV’s SEC complaint throws the company into the spotlight. “The SEC won’t take a criminal’s word, but the spotlight is harsh. ALPHV's motives seem less about ransom, more about setting a precedent that intimidates,” Ferhat Dikbiyik, Ph.D., head of research at cyber risk monitoring company Black Kite, tells InformationWeek via email. “MeridianLink's challenge now is to navigate this tightrope of disclosure and investigation, all while under the public and regulatory microscope.” Dikbiyik points out that ALPHV’s SEC complaint suggests that the group may have ties in the US. The group demonstrates a strong command of English and knowledge of American corporate culture, he explains. Its knowledge of the American regulatory system is particularly indicative of potential stateside ties. “ALPHV's clear English on the dark web could be AI, but their quick SEC rule exploit? That suggests boots on the ground,” says Dikbiyik.


‘Digital Twin Brain’ Could Bridge Artificial and Biological Intelligence

“Cutting-edge advancements in neuroscience research have revealed the intricate relationship between brain structure and function, and the success of artificial neural networks has highlighted the importance of network architecture,” wrote the team. “It is now time to bring these together to better understand how intelligence emerges from the multi-scale repositories in the brain. By mathematically modeling brain activity, a systematic repository of the multi-scale brain network architecture would be very useful for pushing the biological boundary of an established model.” As that systematic repository, the team’s digital twin brain (DTB) would be capable of simulating various states of the human brain in different cognitive tasks at multiple scales, in addition to helping formulate methods for altering the state of a malfunctioning brain. ... “The advantages of this research approach lie in the fact that these methods not only simulate [biologically plausible] dynamic mechanisms of brain diseases at the neuronal scale, at the level of neural populations, and at the brain region level, but also perform virtual surgical treatments that are impossible to perform in vivo owing to experimental or ethical limitations.”


How hybrid cloud and edge computing can converge in your disaster recovery strategy

Hybrid cloud and edge computing are not mutually exclusive. There has been significant growth in hybrid solutions, distributing computing intelligently to combine the benefits of cloud and edge. A bespoke hybrid approach with proper planning and management can enhance your business’s DR strategy. Hybrid cloud’s scalability allows businesses to allocate additional cloud resources during a disaster. These additional resources can stand in for failed edge platforms and devices, keeping the critical applications and systems that serve the business running while reducing the pressure of the recovery process. The speed benefits of dedicated resources in a hybrid cloud solution are multiplied when combined with the reduced latency and enhanced availability of edge computing. Edge devices can be used to process data locally and cache essential data that can be recovered to a cloud platform in case of a disaster. Processing on the edge and transmitting key information to the cloud can enrich your data and inform your DR planning.
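One way to picture that edge-to-cloud flow is a small agent that processes data locally, keeps a local cache, and ships only the essential summary to cloud object storage so it can be recovered after an edge failure. The sketch below is a hypothetical Python outline using boto3 against an assumed S3 bucket; the bucket name, file paths, and the definition of "essential" data are all placeholders, and AWS credentials are assumed to be configured in the environment.

import json
import time
from pathlib import Path

import boto3  # assumes AWS credentials are already configured

BUCKET = "example-dr-replica"                            # hypothetical recovery bucket
LOCAL_CACHE = Path("/var/cache/edge-agent/essential.json")
s3 = boto3.client("s3")

def process_locally(raw_readings):
    """Reduce raw edge data to the essential summary worth protecting."""
    return {"count": len(raw_readings), "max": max(raw_readings), "ts": time.time()}

def cache_and_replicate(raw_readings):
    summary = process_locally(raw_readings)
    LOCAL_CACHE.parent.mkdir(parents=True, exist_ok=True)
    LOCAL_CACHE.write_text(json.dumps(summary))          # local cache survives cloud outages
    # Ship only the summary to the cloud so it can be restored if the edge device fails.
    s3.put_object(Bucket=BUCKET, Key="edge/essential.json", Body=LOCAL_CACHE.read_bytes())

cache_and_replicate([3.1, 4.7, 2.9])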


Gaining Leadership Support for Data Governance

There is no better way to showcase positive business outcomes than by tracking the ways in which good governance can help tackle obstacles over time. The most obvious of such tracking methods is a data audit. Though an audit may be slightly daunting in terms of its invasiveness in operations, it can be indispensable in uncovering lapses in data quality and risky security gaps in storage and retention. You can cover much of the same territory more informally – and less invasively – through interviews and surveys with stakeholders in the company. With a more open-ended, personalized intake of challenges in governance, these modes of recording can capture the nuances that arise in data integration and glitches in system compatibility, and they’re more likely to harvest the sorts of idiosyncratic insights that might fall through the cracks of a formal audit. Indeed, while Seiner advocates for methods of recording that fall on the more facts-and-figures end of the spectrum – single-issue tracking, analytics, and monitoring – he finds that “one of the most successful ways of doing assessments is simply to talk to people.”


Optimizing Risk Transfer for Systematic Cyberresilience

As cyberthreats loom large, enterprises of all sizes are increasingly recognizing the need for cyberinsurance. Cyberinsurance offers financial protection and support in the event of cyberattacks or data breaches. It is predicted that by 2040, the cyberrisk transfer market will become comparable in size to property insurance. However, navigating the cyberinsurance market can be complex and daunting. Understanding the key considerations and making informed decisions are crucial to ensuring adequate coverage and effective risk management. ... In this context, alternative risk transfer solutions such as the use of captive fronting are emerging as crucial tools for managing and transferring cyberrisk. By leveraging a captive solution, enterprises can enhance their cyberresilience, mitigate potential financial losses and navigate cyberinsurance more effectively. Captives help increase the attachment point for the insurance market and act as a solution to cover gaps in the insurance market’s capacity. Insurers are increasingly encouraging the use of captives for cyber.


6 green coding best practices and how to get started

While opting for SaaS dev and test tools may be generally more efficient than installing them to run on servers, cloud apps can still suffer from bloat. Modern DevSecOps tools often create full test environments and run all automated checks on every commit. They can also run full security scans, code linters and complexity analyzers, and stand up entire databases in the cloud. When the team merges the code, they do it all over again. Some systems run on a delay and automatically restart, perhaps on dozens of monitored branches. Observability tools to monitor everything can lead to processing bloat and network saturation. For example, imagine a scenario where the team activates an observability system for all testing. Each time network traffic occurs, the observability system messages a server about what information goes where -- essentially doubling the test traffic. The energy consumed is essentially wasted. At best, the test servers run slowly for little benefit. At worst, the production servers are also affected, and outages occur.
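A common green-coding mitigation for the scenario above is to gate or sample telemetry by environment so that test runs do not double their own traffic. The snippet below is a hypothetical Python decorator illustrating the idea; the environment variable name, the sampling rate, and the reporting function are assumptions for the example rather than any particular observability product's API.

import functools
import os
import random

# Only emit telemetry in production, and even then only for a sample of events,
# so test environments don't generate a second network call for every request.
TELEMETRY_ENABLED = os.getenv("APP_ENV", "test") == "production"
SAMPLE_RATE = 0.1   # report roughly 1 in 10 events

def traced(send_event):
    @functools.wraps(send_event)
    def wrapper(*args, **kwargs):
        if TELEMETRY_ENABLED and random.random() < SAMPLE_RATE:
            return send_event(*args, **kwargs)
        return None   # silently drop the event outside production or the sample
    return wrapper

@traced
def report_request(path, duration_ms):
    print(f"telemetry: {path} took {duration_ms}ms")   # stand-in for a network call

report_request("/checkout", 42)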


Australia ups ante on cyber security

“The government’s ‘health check’ programme announcement is a valiant effort – the true test will be how it goes about educating the right people across an extremely diverse SMB landscape. ‘Concierge-style’ support only goes so far, particularly if it doesn’t know where to go, and businesses don’t understand why to seek it out. “The problem is SMBs don’t know how to start conversations, nor who to turn to. Working alone makes the cost of cyber security defences untenable, but it doesn’t have to be this way. Your local florist, corner store, or even the grassroots neighbourhood startup can contribute to building Australia’s resilience; they need the education to know why and how to be government compliant, fight increasing cyber insurance premium costs, and protect their customers’ PII [personally identifiable information] data.” On the law enforcement side, Operation Aquila will be stepped up to target the highest priority cybercrime threats affecting Australia, and increased global cooperation will be sought to address cybercrime, particularly through regional forums such as the Pacific Islands Law Officers’ Network and the ASEAN Senior Officials Meeting on Transnational Crime.


CISA Roadmap for AI Cybersecurity: Defense of Critical Infrastructure, “Secure by Design” AI Prioritized

The first “line of effort” is a pledge to responsibly use AI to support the mission, establishing governance and adoption procedures primarily for federal agencies. Already at the head of federal cybersecurity programs, CISA will be the conduit for the development of processes from safety to procurement to ethics and civil rights. In terms of privacy and security, the agency will be adopting the NIST AI Risk Management Framework (RMF). The agency is also creating an AI Use Case Inventory to be used in mission support, and to responsibly and securely deploy new systems. The second line of effort directly addresses security by design. This is another area in which establishment and use of the RMF will be a key step, and assessing the AI cybersecurity risks in critical infrastructure sectors is the first item on the menu. This process also appears to involve early engagement with stakeholders in critical infrastructure sectors. Software Bills of Materials (SBOMs) for AI systems will also be a requirement in some capacity, though CISA is in an “evaluation” phase at this point.


How to Work with Your Auditors to Influence a Better Audit Experience

Remember, auditing with agility is a flexible, customizable audit approach that leverages concepts from agile and DevOps to create a more value-added and efficient audit. There are three core components to auditing with agility: value-driven auditing, where the scope of audit work is driven by what’s most important to the organization; integrated auditing, where audit work is integrated with your daily work; and adaptable auditing, where audits become nimble and can adapt to change. Each core component has practices associated with it. For example, practices associated with value-driven auditing include satisfying stakeholders through value delivery. In my book, Beyond Agile Auditing, I state that stakeholders "value audit work that is focused on the highest, most relevant risks and the areas that are important to achieving the organization’s objectives."[1] As an auditor, I like to ask my clients questions like "What absolutely needs to go right for you (or your business) to be successful?" or "What can’t go wrong for you (or your business) to be successful?" I do this to help identify what matters and what is most valuable to my client’s business.



Quote for the day:

“Good manners sometimes means simply putting up with other people's bad manners.” -- H. Jackson Brown, Jr