Daily Tech Digest - November 24, 2023

How American Express Created an Open Source Program Office

American Express has established an open source program office that gamifies the safe development of open source code that can be contributed back to the community. “Without the program existing, a lot of people at the company wouldn’t know about giving back to open source, they wouldn’t see the power in it,” said Amanda Chesin, software engineer at American Express, during a presentation at OSFF. The AmEx OSPO started as an informal group of developers trying to establish a symbiotic relationship with the open source community, said Tim Klever, vice president of development experience at AmEx, at the conference. The first step was to convince skeptical upper management of the value of open source. Security issues were the single largest concern, cited by 56% of executives surveyed by FINOS, followed by quality of components, compliance with external regulations, and intellectual property licensing. ... “That’s really when we kind of became official because we had someone to worry about this stuff and work on it the whole time, even though we only got [her] for a summer,” Klever said.


Navigating the uncharted waters of the Digital Personal Data Protection Act 2023: Overcoming unsolicited challenges in the digital realm

Of particular note is the provision for grievance redressal, affording individuals a legal avenue to hold data fiduciaries accountable. However, in contrast to the penalties imposed on data fiduciaries for non-compliance, the Data Protection Board's authority to levy fines on data principals (for breaching their duties not to file frivolous complaints or impersonate others) is limited to a modest sum of up to ₹ 10,000. This asymmetry poses a significant concern, as it invites groundless complaints: a successful complaint can yield a substantial ₹ 200 crore award, while an unsuccessful one carries a comparatively nominal penalty of ₹ 10,000. This dynamic could lead to an influx of speculative claims and undue frustration of the complaint forum. There may be merit in revisiting the penalty structure, aligning it with the sum initially sought by the complainant to ensure the integrity of the forum. One notable absence in the Act is the 'right to be forgotten', a provision found in comparable data protection legislation such as the GDPR.


Could edge computing unlock AI’s vast potential?

Beyond the increased performance that AI applications demand, a key benefit of the edge model is reliability and resilience. Consumers have taken to AI, with 73% worldwide saying they trust content produced by generative AI, and 43% keen for organizations to implement generative AI throughout customer interactions. Businesses that can’t keep their AI-powered services running will suffer from declining customer satisfaction and even a drop in market share. When a traditional data center suffers a power outage – perhaps due to a grid failure or natural disaster – apps reliant on these centralized data centers simply cannot function. Edge computing avoids this single point of failure: with compute more distributed, smart networks can instead use the processing power nearest to them to keep functioning. There are also benefits when it comes to data governance. If sensitive data is processed at the edge of the network, it doesn’t need to be processed in a public cloud or centralized data center, meaning fewer opportunities to steal data at rest or in transit. ... Finally, there are cost savings to think about. Cloud service providers often charge businesses to transfer data from their cloud storage.


Cloud security and devops have work to do

First, they are not given the budget to plug these vulnerabilities. In some instances, this is true: cloud and development security are often underfunded. However, in most cases, the funding is good or great relative to their peers, and the problems still exist. Second, they can’t find the talent they need. For the most part, this is also legit. I figure there are 10 security and development-security positions chasing a single qualified candidate. As I talked about in my last post, we need to solve this. Despite the forces pushing against you, there are some recommended courses of action. CISOs should be able to capture metrics demonstrating risks and communicate them to executives and the board. Those are hard conversations but necessary if you’re looking to take on these issues as an executive team and reduce the impact on you and the development teams when stuff hits the fan. In many instances, the C-levels and the boards consider this a ploy to get more budget—that needs to be dealt with as well. Actions that can remove some of this risk include continuous security training for software development teams.


Windows-as-an-app is coming

Windows App, which is still in beta, will let you connect to Azure Virtual Desktop, Windows 365, Microsoft Dev Box, Remote Desktop Services, and remote PCs from, well, pretty much any computing device. Specifically, you can use it from Macs, iPhones, iPads, other Windows machines, and — pay attention! — web browsers. That last part means you'll be able to run Windows from Linux-powered PCs, Chromebooks, and Android phones and tablets. So, if you've been stuck running Windows because your boss insists that you can't get your job done from a Chromebook, Linux PC, or Mac, your day has come. You can still run the machine you want and use Windows for only those times you require Windows-specific software. Mind you, you've been able to do that for some time. As I pointed out recently, Windows software vendors by and large don't want you to run standalone Windows applications; they prefer web-based Software-as-a-Service (SaaS) applications. They can make a lot more money from you by insisting you pay a monthly subscription rather than a one-time payment. Sure, Microsoft made its first billions from Windows and the PC desktop, but that hasn't been its business plan for years now.


Q-Learning: Advancing Towards AGI and Artificial Superintelligence (ASI) through Reinforcement Learning

At its essence, Q-learning is akin to introducing a reward system to a computer, aiding it in deciphering the most effective strategies for playing a game. This process involves defining various actions that a computer can take in a given situation or state, such as moving left, right, up, or down in a video game. These actions and states are meticulously logged in what is commonly referred to as a Q-table. The Q-table serves as the computer’s playground for learning, where it keeps tabs on the quality (Q-value) of each action in every state. Initially, it’s comparable to a blank canvas – the computer embarks on this journey without prior knowledge of which actions will lead to optimal results. The adventure commences with exploration. The computer takes a plunge into trying out different actions randomly, navigating the game environment, and recording the outcomes in the Q-table. Think of it as the computer playfully experimenting and gradually figuring out the lay of the land. Learning from Rewards forms the core of Q-learning. Each time the computer takes an action, it earns a reward. 
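
The reward loop described above is compact enough to show directly. Below is a minimal sketch of tabular Q-learning in TypeScript on an assumed toy environment: a six-cell corridor where reaching the rightmost cell pays a reward of 1. The learning rate, discount factor, and exploration rate are illustrative values, not prescriptions, and the update line is the standard rule Q(s, a) ← Q(s, a) + α[r + γ·maxₐ′ Q(s′, a′) − Q(s, a)].

```typescript
// Minimal tabular Q-learning on a 1-D grid world: start at state 0,
// reach state 5 for a reward of 1. Actions: 0 = left, 1 = right.
const N_STATES = 6;
const ACTIONS = [0, 1];
const ALPHA = 0.1;   // learning rate
const GAMMA = 0.9;   // discount factor
const EPSILON = 0.1; // exploration rate

// The Q-table: one row per state, one column per action, all zeros at first.
const Q: number[][] = Array.from({ length: N_STATES }, () => ACTIONS.map(() => 0));

function step(state: number, action: number): { next: number; reward: number; done: boolean } {
  const next = Math.max(0, Math.min(N_STATES - 1, state + (action === 1 ? 1 : -1)));
  const done = next === N_STATES - 1;
  return { next, reward: done ? 1 : 0, done };
}

function chooseAction(state: number): number {
  // Explore with probability EPSILON; break ties randomly so the untrained
  // agent performs an unbiased random walk instead of getting stuck.
  if (Math.random() < EPSILON || Q[state][0] === Q[state][1]) {
    return Math.random() < 0.5 ? 0 : 1;
  }
  return Q[state][1] > Q[state][0] ? 1 : 0;
}

for (let episode = 0; episode < 500; episode++) {
  let state = 0;
  for (let t = 0; t < 1000; t++) { // cap episode length so a run never hangs
    const action = chooseAction(state);
    const { next, reward, done } = step(state, action);
    // Core Q-learning update: nudge Q(s, a) toward reward + discounted best future value.
    Q[state][action] += ALPHA * (reward + GAMMA * Math.max(...Q[next]) - Q[state][action]);
    state = next;
    if (done) break;
  }
}
console.log(Q.map((row) => row.map((v) => v.toFixed(2)))); // "right" should dominate every state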


ChatGPT Use Sparks Code Development Risks

Randy Watkins, CTO at Critical Start, advises organizations to build their own policies and methodology when it comes to the implementation of AI-generated code into their software development practices. “In addition to some of the standard coding best practices and technologies like static and dynamic-code analysis and secure CI/CD practices, organizations should continue to monitor the software development and security space for advancements in the space,” he told InformationWeek via email. He says organizations should leverage AI-generated code as a starting point but tap human developers to review and refine the code to ensure it meets standards. John Bambenek, principal threat hunter at Netenrich, adds that leadership needs to “value secure code” and ensure that, at a minimum, automated testing is part of all code going to production. “Ultimately, many of the risks of generative AI code can be solved with effective and thorough mandatory testing,” he noted in an email. He explains that, as part of the CI/CD pipeline, organizations should ensure mandatory testing is done on all production commits and routine comprehensive assessment is done on the entire codebase.


6 common problems with open source code integration

Closed source software is typically maintained, updated and patched exclusively by the software vendors, which can be a big benefit for development teams who lack the time, resources or expertise to do it themselves. Some open source platforms receive active support from proprietary software vendors, such as Red Hat Enterprise Linux and commercial distributions of Kubernetes. For the most part, however, organizations that deploy open source software are responsible for ensuring it remains updated. Failure to do so carries the risk of running outdated code that is buggy or has security vulnerabilities. This challenge is exacerbated by a lack of centralized management consoles or automated update processes that can help ensure all the open source components in use are up to date -- something often highlighted as an advantage of paying the price for proprietary software suites. This is another reason SCA tools are crucial for organizations that commit to the open source approach. While these tools don't provide automated update capabilities, they help the organization track what open source components exist and what each one's current version is. 
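
The tracking job those SCA tools do can be approximated at small scale. The sketch below, assuming a Node.js project with an npm v7+ lockfile, walks the declared dependencies and reports the exact version installed for each; real SCA products layer license and vulnerability data on top of exactly this kind of inventory.

```typescript
// Toy component inventory: list each open source dependency declared in
// package.json together with the exact version pinned in package-lock.json.
import { readFileSync } from "node:fs";

const manifest = JSON.parse(readFileSync("package.json", "utf8"));
const lockfile = JSON.parse(readFileSync("package-lock.json", "utf8"));

const declared: Record<string, string> = {
  ...(manifest.dependencies ?? {}),
  ...(manifest.devDependencies ?? {}),
};

for (const [name, range] of Object.entries(declared)) {
  // npm v7+ lockfiles key installed packages as "node_modules/<name>".
  const installed = lockfile.packages?.[`node_modules/${name}`]?.version ?? "not installed";
  console.log(`${name}: declared ${range}, installed ${installed}`);
}
```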


More questions for Australia cybersecurity strategy

Fairman believes that strategies are only good if they’re successfully implemented, and committing to reporting deadlines or processes is a way to reassure everyone that the government will do its best to stick to its plan. “We have to consider the financial impact of some of those measures on businesses, and the costs they will have to bear. The economy is still very much in a recovery phase, and many businesses will probably need some sort of financial support to afford cybersecurity upgrades. A cyber-health check for SMBs is great, but if most can’t afford to fill the identified cybersecurity gaps, the plan will fail,” added Fairman. ... As the strategy outlined six shields for cybersecurity, Thompson felt that a seventh, dedicated solely to citizen responsibility, would have been a useful inclusion. ... On sharing threat intelligence in the region, Thompson, who is also the former head of information warfare for the Australian Defence Force, said that the government’s strong focus on sovereign industry is something for which he and others have long campaigned.


AI and contextual threat intelligence reshape defense strategies

Cybersixgill believes that in 2024, threat actors will use AI to increase the frequency and accuracy of their activities by automating large-scale cyberattacks, creating duplicitous phishing email campaigns, and developing malicious content targeting companies, employees, and customers. Malicious attacks like data poisoning and vulnerability exploitation in AI models will also gain momentum, causing organizations to unwittingly provide sensitive information to untrustworthy parties. Similarly, AI models can be trained to identify and exploit vulnerabilities in computer networks without detection. Cybersixgill also predicts the rise of shadow generative AI, where employees use AI tools without organizational approval or oversight. Shadow generative AI can lead to data leaks, compromised accounts, and widening vulnerability gaps in a company’s attack surface. ... The C-suite and other executives will need a clearer understanding of their organization’s cybersecurity policies, processes, and tools. Cybersixgill believes companies will increasingly appoint cybersecurity experts to the board to fulfill progressively stringent reporting requirements and conduct good cyber governance.



Quote for the day:

"Remember, teamwork begins by building trust. And the only way to do that is to overcome our need for invulnerability." -- Patrick Lencioni

Daily Tech Digest - November 23, 2023

Web Shells Gain Sophistication for Stealth, Persistence

One reason attackers have taken to Web shells is because of their ability to stay under the radar. Web shells are hard to detect with static analysis techniques, because the files and code are so easy to modify. Moreover, Web shell traffic — because it is just HTTP or HTTPS — blends right in, making it hard to detect with traffic analysis, says Akamai's Zavodchik. "They communicate on the same ports, and it's just another page of the website," he says. "It's not like the classic malware that will open the connection back from the server to the attacker. The attacker just browses the website. There's no malicious connection, so no anomalous connections go from the server to the attacker." In addition, because there are so many off-the-shelf Web shells, attackers can use them without tipping off defenders as to their identity. The WSO-NG Web shell, for instance, is available on GitHub. And Kali Linux is open source; it's a Linux distribution focused on providing easy-to-use tools for red teams and offensive operations, and it provides 14 different Web shells, giving penetration testers the ability to upload and download files, execute commands, and create and query databases and archives.
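
Because Web shell traffic blends into normal HTTP, defenders often fall back on scanning the file system for the shells themselves. The sketch below is a deliberately naive static sweep in TypeScript; the marker patterns and the /var/www/html web root are illustrative assumptions, and as the article notes, serious detection also needs behavioral and traffic analysis.

```typescript
// Naive static sweep for common web shell markers in a web root.
// Only flags low-hanging fruit; sophisticated shells will evade this.
import { readdirSync, readFileSync, statSync } from "node:fs";
import { join } from "node:path";

const SUSPICIOUS = [
  /eval\s*\(/,
  /base64_decode\s*\(/,
  /shell_exec\s*\(/,
  /passthru\s*\(/,
  /\$_(GET|POST|REQUEST)\s*\[/,
];

function scan(dir: string): void {
  for (const entry of readdirSync(dir)) {
    const path = join(dir, entry);
    if (statSync(path).isDirectory()) { scan(path); continue; }
    if (!/\.(php|jsp|aspx?)$/i.test(path)) continue; // only server-side script files
    const source = readFileSync(path, "utf8");
    const hits = SUSPICIOUS.filter((re) => re.test(source));
    // Requiring two or more markers cuts down on false positives.
    if (hits.length >= 2) console.log(`possible web shell: ${path} (${hits.length} markers)`);
  }
}

scan("/var/www/html"); // assumed web root; adjust for your server
```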


Will More Threat Actors Weaponize Cybersecurity Regulations?

Based on what has been disclosed thus far, the breach sounds relatively minor, but ALPHV’s SEC complaint throws the company into the spotlight. “The SEC won’t take a criminal’s word, but the spotlight is harsh. ALPHV's motives seem less about ransom, more about setting a precedent that intimidates,” Ferhat Dikbiyik, Ph.D., head of research at cyber risk monitoring company Black Kite, tells InformationWeek via email. “MeridianLink's challenge now is to navigate this tightrope of disclosure and investigation, all while under the public and regulatory microscope.” Dikbiyik points out that ALPHV’s SEC complaint suggests that the group may have ties in the US. The group demonstrates a strong command of English and knowledge of American corporate culture, he explains. Its knowledge of the American regulatory system is particularly indicative of potential stateside ties. “ALPHV's clear English on the dark web could be AI, but their quick SEC rule exploit? That suggests boots on the ground,” says Dikbiyik.


‘Digital Twin Brain’ Could Bridge Artificial and Biological Intelligence

“Cutting-edge advancements in neuroscience research have revealed the intricate relationship between brain structure and function, and the success of artificial neural networks has highlighted the importance of network architecture,” wrote the team. “It is now time to bring these together to better understand how intelligence emerges from the multi-scale repositories in the brain. By mathematically modeling brain activity, a systematic repository of the multi-scale brain network architecture would be very useful for pushing the biological boundary of an established model.” As that systematic repository, the team’s digital twin brain (DTB) would be capable of simulating various states of the human brain in different cognitive tasks at multiple scales, in addition to helping formulate methods for altering the state of a malfunctioning brain. ... “The advantages of this research approach lie in the fact that these methods not only simulate [biologically plausible] dynamic mechanisms of brain diseases at the neuronal scale, at the level of neural populations, and at the brain region level, but also perform virtual surgical treatments that are impossible to perform in vivo owing to experimental or ethical limitations.”


How hybrid cloud and edge computing can converge in your disaster recovery strategy

Hybrid cloud and edge computing are not mutually exclusive. There has been significant growth in hybrid solutions, distributing computing intelligently to combine the benefits of cloud and edge. A bespoke hybrid approach with proper planning and management can enhance your business’s DR strategy. Hybrid cloud’s scalability allows businesses to allocate additional cloud resources during a disaster. These additional resources can be allocated to potentially replace failed edge platforms and devices, maintaining critical applications and systems that are servicing the business needs, while reducing the pressure of the recovery process. The speed benefits of dedicated resources in a hybrid cloud solution are multiplied when combined with the reduced latency and enhanced availability of edge computing. Edge devices can be used to process data locally, and cache essential data which can be recovered to a cloud platform in case of a disaster. Processing on the edge and transmitting key information to the cloud can enrich your data, and inform your DR planning.


Gaining Leadership Support for Data Governance

There is no better way to showcase positive business outcomes than by tracking the ways in which good governance can help tackle obstacles over time. The most obvious of such tracking methods is a data audit. Though an audit may be slightly daunting in terms of its invasiveness in operations, it can be indispensable in uncovering lapses in data quality and risky security gaps in storage and retention. You can cover much of the same territory more informally – and less invasively – through interviews and surveys with stakeholders in the company. With a more open-ended, personalized intake of challenges in governance, these modes of recording can capture the nuances that arise in data integration and glitches in system compatibility, and they’re more likely to harvest the sorts of idiosyncratic insights that might fall through the cracks of a formal audit. Indeed, while Seiner advocates for methods of recording that fall on the more facts-and-figures end of the spectrum – single-issue tracking, analytics, and monitoring – he finds that “one of the most successful ways of doing assessments is simply to talk to people.”


Optimizing Risk Transfer for Systematic Cyberresilience

As cyberthreats loom large, enterprises of all sizes are increasingly recognizing the need for cyberinsurance. Cyberinsurance offers financial protection and support in the event of cyberattacks or data breaches. It is predicted that by 2040, the cyberrisk transfer market will become comparable in size to property insurance. However, navigating the cyberinsurance market can be complex and daunting. Understanding the key considerations and making informed decisions are crucial to ensuring adequate coverage and effective risk management. ... In this context, alternative risk transfer solutions such as the use of captive fronting are emerging as crucial tools for managing and transferring cyberrisk. By leveraging a captive solution, enterprises can enhance their cyberresilience, mitigate potential financial losses and navigate cyberinsurance more effectively. Captives help increase the attachment point for the insurance market and act as a solution to cover gaps in the insurance market’s capacity. Insurers are increasingly encouraging the use of captives for cyber.


6 green coding best practices and how to get started

While opting for SaaS dev and test tools may be generally more efficient than installing them to run on servers, cloud apps can still suffer from bloat. Modern DevSecOps tools often create full test environments and run all automated checks on every commit. They can also run full security scans, code linters and complexity analyzers, and stand up entire databases in the cloud. When the team merges the code, they do it all over again. Some systems run on a delay and automatically restart, perhaps on dozens of monitored branches. Observability tools to monitor everything can lead to processing bloat and network saturation. For example, imagine a scenario where the team activates an observability system for all testing. Each time network traffic occurs, the observability system messages a server about what information goes where -- essentially doubling the test traffic. The energy consumed is essentially wasted. At best, the test servers run slowly for little benefit. At worst, the production servers are also affected, and outages occur.


Australia ups ante on cyber security

“The government’s ‘health check’ programme announcement is a valiant effort – the true test will be how it goes about educating the right people across an extremely diverse SMB landscape. ‘Concierge-style’ support only goes so far, particularly if it doesn’t know where to go, and businesses don’t understand why to seek it out. “The problem is SMBs don’t know how to start conversations, nor who to turn to. Working alone makes the cost of cyber security defences untenable, but it doesn’t have to be this way. Your local florist, corner store, or even the grassroots neighbourhood startup can contribute to building Australia’s resilience; they need the education to know why and how to be government compliant, fight increasing cyber insurance premium costs, and protect their customers’ PII [personally identifiable information] data.” On the law enforcement side, Operation Aquila will be stepped up to target the highest priority cybercrime threats affecting Australia, and increased global cooperation will be sought to address cybercrime, particularly through regional forums such as the Pacific Islands Law Officers’ Network and the ASEAN Senior Officials Meeting on Transnational Crime.


CISA Roadmap for AI Cybersecurity: Defense of Critical Infrastructure, “Secure by Design” AI Prioritized

The first “line of effort” is a pledge to responsibly use AI to support the mission, establishing governance and adoption procedures primarily for federal agencies. Already at the head of federal cybersecurity programs, CISA will be the conduit for the development of processes from safety to procurement to ethics and civil rights. In terms of privacy and security, the agency will be adopting the NIST AI Risk Management Framework (RMF). The agency is also creating an AI Use Case Inventory to be used in mission support, and to responsibly and securely deploy new systems. The second line of effort directly addresses security by design. This is another area in which establishment and use of the RMF will be a key step, and assessing the AI cybersecurity risks in critical infrastructure sectors is the first item on the menu. This process also appears to involve early engagement with stakeholders in critical infrastructure sectors. Software Bills of Materials (SBOMs) for AI systems will also be a requirement in some capacity, though CISA is in an “evaluation” phase at this point.


How to Work with Your Auditors to Influence a Better Audit Experience

Remember, auditing with agility is a flexible, customizable audit approach that leverages concepts from agile and DevOps to create a more value-added and efficient audit. There are three core components to auditing with agility: value-driven auditing, where the scope of audit work is driven by what’s most important to the organization; integrated auditing, where audit work is integrated with your daily work; and adaptable auditing, where audits become nimble and can adapt to change. Each core component has practices associated with it. For example, practices associated with value-driven auditing include satisfying stakeholders through value delivery. In my book, Beyond Agile Auditing, I state that stakeholders "value audit work that is focused on the highest, most relevant risks and the areas that are important to achieving the organization’s objectives."[1] As an auditor, I like to ask my clients questions like "What absolutely needs to go right for you (or your business) to be successful?" or "What can’t go wrong for you (or your business) to be successful?" I do this to help identify what matters and what is most valuable to my client’s business.



Quote for the day:

“Good manners sometimes means simply putting up with other people's bad manners.” -- H. Jackson Brown, Jr.

Daily Tech Digest - November 22, 2023

What It Means to Be a Software Architect — and Why It Matters

One of the misperceptions that architects face is that we are engaging in architecture for architecture’s sake, or that we propose new technologies mainly because of the “coolness” factor. Our challenge is to counter this misperception by arguing not merely for the aesthetic value of good design, but for the pragmatic, economic value. We need to frame the need for intentional design as something that can save the company significant costs by averting disadvantageous technology and design choices, producing a distinct competitive edge through market differentiation and paving the way for increased customer satisfaction. ... My commentary on some of Martin Fowler’s views of software architecture is not intended to paint a complete picture of this important role and how it differs from other types of architects. Rather, I’ve sought to highlight the importance of designing the structure of a system at the code level to ensure that the application of relevant patterns results in a design that can sustain cumulative functionality over time, increasing business value while reducing time to market.


5 Ways to Supercharge Incident Remediation with Automation

It’s a balance between your confidence in the automation, the value or cost of the incident, and the frequency with which the task occurs. Common incidents with proven automated steps for diagnosis and remediation are good opportunities to trigger with AIOps. From there, follow a similar process to prioritize your incident response. Automate diagnosis and remediation steps for serious outages to speed resolution. Then focus on increasing efficiency by automating recurring diagnostics and remediation actions that occur across many kinds of incidents. You can safely automate and trigger lower-risk actions such as read-only diagnostic pulls with AIOps, giving downstream personnel the information they need, even when they are paged. You can automate common remediation actions and make them available to responders to use. This automation can utilize secrets management tools such as Vault to enable privileged actions in production environments without sharing credentials, making it safer to delegate to responders.
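
To make the Vault idea concrete, here is a minimal sketch of a remediation step that fetches a credential from Vault's KV v2 HTTP API at run time rather than embedding it in the runbook. The secret path, environment variables, and the restart scenario are assumptions for illustration; a production setup would typically use short-lived dynamic secrets and tightly scoped policies.

```typescript
// Sketch: a remediation step that pulls a credential from HashiCorp Vault's
// KV v2 HTTP API instead of baking secrets into the automation itself.
// Requires Node 18+ for the global fetch. VAULT_ADDR, VAULT_TOKEN, and the
// secret path "secret/data/prod/restart-svc" are illustrative assumptions.
const VAULT_ADDR = process.env.VAULT_ADDR ?? "http://127.0.0.1:8200";
const VAULT_TOKEN = process.env.VAULT_TOKEN ?? "";

async function readSecret(path: string): Promise<Record<string, string>> {
  const res = await fetch(`${VAULT_ADDR}/v1/${path}`, {
    headers: { "X-Vault-Token": VAULT_TOKEN },
  });
  if (!res.ok) throw new Error(`Vault read failed: ${res.status}`);
  const body = await res.json();
  return body.data.data; // KV v2 nests the key/value pairs under data.data
}

async function restartService(): Promise<void> {
  const creds = await readSecret("secret/data/prod/restart-svc");
  // ...use creds.username / creds.password to call the service's admin API.
  // Responders trigger this step without ever seeing the raw credential.
  console.log(`fetched credential for ${creds.username}, running remediation...`);
}

restartService().catch(console.error);
```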


Tech Pros Quitting Over Salary Stagnation, Stress

Gartner Vice President Analyst Lily Mok told InformationWeek via email that CIOs should work with their recruitment and compensation teams to identify IT roles and skills areas facing higher attrition risk and recruitment challenges due to noncompetitive compensation. “This will help pinpoint where additional funding will be needed in the short term to address pay gaps,” she says. “Organizations with limited financial resources should prioritize allocating increases to high-risk areas.” She also recommends conducting spot-checks of market pay conditions on at least a quarterly basis and updating pay benchmarks for key IT roles and skills areas with more recent data. “At the very least, I would recommend an annual review of market pay levels for key IT jobs and skills areas,” Mok adds. ... Another suggestion is to create a separate salary structure for IT, an approach that helps avoid force-fitting IT jobs into enterprise-wide pay grades that often place a higher weight on internal equity than external competitiveness when valuing jobs across different functions.


Unlocking Cyber Resilience: The Role Of SBOMs In Cybersecurity

Implementing an SBOM strategy is a step towards fortifying your cybersecurity defenses. While having a list of the components that make up your software supply chain is better than not having one, context is also crucial. You don’t just want to know that you have a given code module; you want all of the associated data as well. Vulnerabilities and exploits tend to affect specific versions, so you need to know the details of the versions in your environment, the date the code was released, where and how the code is used, etc. Automation is essential. It’s impractical, bordering on impossible, to manage or maintain an accurate SBOM through any manual process. By automating SBOM generation and maintenance, the margin for human error diminishes, the speed of response accelerates, and organizations can scale their security practices as they grow. Compliance is another piece of the puzzle. Your SBOM solution should align with industry standards and regulatory requirements, ensuring that you aren't just secure, but also compliant.
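
As a concrete illustration of pairing component lists with that version context, the sketch below reads a CycloneDX JSON SBOM and flags components whose exact versions appear on a deny list. The file name and the deny-list entries are hypothetical, and a real pipeline would pull from a live vulnerability feed rather than a hard-coded map.

```typescript
// Sketch: scan a CycloneDX JSON SBOM for components pinned to versions on
// an internal deny list (a stand-in here for a real vulnerability feed).
import { readFileSync } from "node:fs";

interface Component { name: string; version?: string; purl?: string; }

const DENY_LIST: Record<string, string[]> = {
  // hypothetical entries: component name -> known-bad versions
  "log4j-core": ["2.14.1", "2.15.0"],
};

const sbom = JSON.parse(readFileSync("sbom.cdx.json", "utf8"));
const components: Component[] = sbom.components ?? [];

for (const c of components) {
  const bad = DENY_LIST[c.name];
  if (bad && c.version && bad.includes(c.version)) {
    console.log(`ALERT: ${c.name}@${c.version} (${c.purl ?? "no purl"}) is on the deny list`);
  } else {
    console.log(`ok: ${c.name}@${c.version ?? "unknown version"}`);
  }
}
```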


CISOs can marry security and business success

While businesses aim for different outcomes, one goal that the business typically prescribes for cybersecurity is business continuity. This is probably due to most executives viewing cybersecurity only as an operational necessity. At the same time, they fail to see cybersecurity’s essential contribution to the due diligence aspect of the procurement process. The complexity and length of procurement processes have increased over the years, as prospective clients use this as part of their third-party risk management. Executives that are aware of clients’ needs can use them to improve the cybersecurity of the organization and its offerings, by translating them into features that will raise the offering’s competitive advantage. Traditionally, R&D and innovation teams perceive the CISO’s role as an obstacle to innovation and advancement. Conventional security entities frequently resort to phrases like “this can’t be done due to security protocols,” obstructing changes to existing infrastructure and impeding innovation. If security is confined to an IT concern rather than recognized as a business imperative, CISOs struggle to emerge as strategic partners.


Advanced Applications of Open-Source Technologies

The widespread adoption of open-source technologies is attributed to the culture and philosophy underpinning the open-source movement. Early pioneers in the open-source community championed the belief in the transformative power of collaborative, community-driven efforts and unrestricted access to software source code. For young developers exploring careers, open source presents exciting opportunities. Contributing to open-source projects enables developers to hone their skills, gain visibility, and engage with mentorship from experienced professionals. ... Demonstrated by Brazil’s Amazonia-1 satellite program, Julia is instrumental in in-orbit sensor calibration, showcasing its adaptability beyond conventional software development. NASA, a leader in space exploration, also utilises Julia for various purposes, including gaining insights into the intricacies of Earth’s oceans. This strategic adoption of open-source technology highlights its pivotal role as more than just a developer’s tool, serving as a crucial enabler to tackle real-world challenges on a global scale.


Generative AI is a developer's delight. Now, let's find some other use cases

"We aren't surprised that the most common application of generative AI is in programming, using tools like GitHub Copilot or ChatGPT," Mike Loukides, author of the O'Reilly report, writes. "However, we are surprised at the level of adoption." There is also evidence of a healthy tools ecosystem that has already sprung up around generative AI, the report indicates. ... "Automating the process of building complex prompts has become common, with patterns like retrieval-augmented generation (RAG) and tools like LangChain. And there are tools for archiving and indexing prompts for reuse, vector databases for retrieving documents that an AI can use to answer a question, and much more. We're already moving into the second generation of tooling." ... "Programmers have always developed tools that would help them do their jobs, from test frameworks to source control to integrated development environments. Programmers will do what's necessary to get the job done, and managers will be blissfully unaware as long as their teams are more productive and goals are being met."


In the symphony of enterprise, every business today dances to the silent tune of technology

Not all AI applications have had a positive impact – content writing and the media industry are the worst hit. It was widely believed that creative industries would be the last to be impacted by technologies like AI; however, the ground realities are very different. One of the demands of the striking Writers Guild of America was that AI not encroach on writers’ credits and compensation. No matter the core product, all functions of an organization now utilize technology in some form or manner – planning, organizing, analysis, marketing, sales, customer engagement or service. Technology has always been a catalyst for progress, often propelling non-tech companies into new realms of efficiency and cost-effectiveness. From the industrial revolution’s steam engines to the digital age’s computers, companies outside the technology sector have harnessed innovation to transform their operations. Today, the conductor of this transformative orchestra is artificial intelligence (AI) and its darling subset – Generative AI.


5 pillars of a cloud-conscious culture

“A developer shouldn’t just provision an extra-large server and then leave it running,” says Firment. “Coders have to learn to work in a cloud native way. That requires understanding terms like elasticity, scalability, and resiliency. They need to know what we mean by multiple availability zones. Developers can still leverage their skills in the cloud, but they just have to apply them in the new way.” Building a culture is like building a tribe, and certificates are a good marker of the new tribe. They create a sense of belonging. Rituals are equally important. “As individuals get certified, create a cloud of fame,” says Firment. “That’s a great way to say you value people who develop the skills. And it’s an artifact of the new culture.” Celebrating certification is also highly effective. “Establish a weekly or monthly cloud hour, where people share what they’re learning on the way to getting certified,” he says. “Ultimately, they should share how they’re applying the knowledge and customer success stories. Storytelling is a big part of creating a culture.”


The SSO tax is killing trust in the security industry

Before some of these solutions are adopted, there are steps we can all take. If you are responsible for identity and access management at an organization, have you audited the authentication tokens you rely on to ensure they operate as expected? Have you considered what compensating controls you could put in place? Are there security products that can do that auditing for you or otherwise mitigate this risk in your environment? Do the security questionnaires your company sends to potential SaaS application providers ask how they configure authentication tokens? It is going to take a serious collaborative "security by design" effort between SSO providers, application developers, and browser companies to repair the broken SSO environment we currently operate under. We single out application providers for criticism in this article because they so often charge an upgrade fee to integrate with SSO. If they are going to charge us a tax, they need to step up or share in the blame for the compromises that will continue to happen. 
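
One concrete form that token auditing can take is decoding the JWTs your SSO provider issues and checking their claims against policy. The sketch below (Node.js, no signature verification, which would require the IdP's keys) flags tokens with no expiry, overly long lifetimes, or an unexpected audience; the eight-hour lifetime threshold is an assumed policy, not a standard.

```typescript
// Audit a JWT's claims. This only inspects the payload - verifying the
// signature against the IdP's published keys is a separate, necessary step.
function decodeJwtPayload(token: string): Record<string, unknown> {
  const payload = token.split(".")[1]; // header.payload.signature
  return JSON.parse(Buffer.from(payload, "base64url").toString("utf8"));
}

function auditToken(token: string, expectedAudience: string): string[] {
  const claims = decodeJwtPayload(token) as { exp?: number; iat?: number; aud?: string | string[] };
  const findings: string[] = [];
  if (claims.exp === undefined) {
    findings.push("token never expires (no exp claim)");
  } else if (claims.iat !== undefined && claims.exp - claims.iat > 8 * 3600) {
    // 8h is an assumed internal policy threshold, not an RFC requirement.
    findings.push(`token lifetime exceeds 8h: ${(claims.exp - claims.iat) / 3600}h`);
  }
  const aud = Array.isArray(claims.aud) ? claims.aud : [claims.aud];
  if (!aud.includes(expectedAudience)) findings.push(`unexpected audience: ${claims.aud}`);
  return findings;
}

// Example: auditToken(rawJwtString, "https://app.example.com")
```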



Quote for the day:

"Success consists of getting up just one more time than you fall." -- Oliver Goldsmith

Daily Tech Digest - November 21, 2023

How to apply design thinking in data science

Observing end-users and recognizing different stakeholder needs is a learning process. Data scientists may feel the urge to dive right into problem-solving and prototyping but design thinking principles require a problem-definition stage before jumping into any hands-on work. “Design thinking was created to better solutions that address human needs in balance with business opportunities and technological capabilities,” says Matthew Holloway, global head of design at SnapLogic. To develop “better solutions,” data science teams must collaborate with stakeholders to define a vision statement outlining their objectives, review the questions they want analytics tools to answer, and capture how to make answers actionable. Defining and documenting this vision up front is a way to share workflow observations with stakeholders and capture quantifiable goals, which supports closed-loop learning. Equally important is to agree on priorities, especially when stakeholder groups may have common objectives but seek to optimize department-specific business workflows.


The role of big data in auditing and assurance services

Conventionally, audit judgements rely solely on evidence sourced from structured datasets in an organization’s financial records. But technological advances in data storage, processing power and analytic tools have made it easier to obtain unstructured data to support audit evidence. Big data can be used for prediction by applying complex analytics to glean audit evidence from datasets and other sources encompassing organizations, industries, nature, internet clicks, social media, market research and numerous other channels. ... An innovative system will not only enable the application of AI-embedded Natural Language Processing (NLP) to streamline unstructured data but also ensure its integration with Optical Character Recognition (OCR). These capabilities and other cutting-edge technologies will effectively help convert both structured and unstructured data into meaningful insights to drive the audit. Thus, the purpose of using big data is to make it easier to eliminate human errors, flag risks in time and spot fraudulent transactions and, in effect, modernize audit operations, thereby improving the efficiency and accuracy of the financial reporting process.


Operators unprepared for high gains from low-power IoT roaming

A key feature supported by the latest technology is passive or ambient IoT, which aims to connect sensors and devices to cellular networks without a power source, something that could dramatically increase the number of cellular IoT devices. This capability is becoming increasingly appealing to several enterprise verticals. NB-IoT and LTE-M are backed by major mobile operators, offering standardised connectivity with global reach. Yet Juniper warned that a key technical challenge faced by operators is their inefficiency in detecting low-power devices roaming on their networks, meaning that operators lose potential revenue from these undetected devices. Due to their low data usage and intermittent connectivity, these devices require constant network monitoring to fully maximise roaming revenue. ... “Operators must fully leverage the insights gained from AI-based detection tools to introduce premium billing of roaming connections to further maximise roaming revenue,” said research author Alex Webb. “This must be done by implementing roaming agreements that price roaming connectivity on network resources used and time connected to the network.”


How to improve cyber resilience by evaluating cyber risk

The biggest challenge in evaluating cyber risk is that we always underestimate it. The impact is almost always worse than what was estimated. A lot of us are professional risk mitigators and managers, and we still get it wrong. Going back to the MGM Resorts cyber attack, I refuse to believe that MGM believed that their ransomware breach was going to cost them US$1 billion in between lost revenues, lost valuation and loss of confidence from both the market and customers. That, to me, is the biggest issue. There is a huge gap there. Even though there are a lot of numbers surrounding the cost of a data breach, they still all significantly underestimate it. So that to me, I think is the biggest area. ... We are spending a lot of time talking about the tools that these actors use, whether it is artificial intelligence (AI), ransomware, hacking, national security threats and so on. To make an impact against this threat we must focus on resilience and what you can tolerate, then understanding what you can withstand and what conditions you can withstand them under. 


What Sam Altman's move to Microsoft means for ChatGPT's future: 3 possible paths forward

Microsoft acquires what's left of OpenAI and kicks OpenAI's current board of directors to the curb. Much of OpenAI's current technology runs on Azure already, so this might make a lot of sense from an infrastructure point of view. It also makes a lot of sense from a leadership point of view, given that Microsoft now has OpenAI's spiritual and, possibly soon, technical leadership. Plus, if OpenAI employees were already planning to defect, it makes a lot of sense for Microsoft to simply fold OpenAI into the company's gigantic portfolio. I think this may be the only practical way forward for OpenAI to survive. If OpenAI were to lose the bulk of its innovation team, it would be a shell operating on existing technology in a market that's running at warp speed. Competitors would rapidly outpace it. But if it were brought into Microsoft, then it can keep moving at pace, under the guidance of leadership it is already comfortable with, and continue executing on plans it already has.


Kaspersky’s Advanced Persistent Threats Predictions for 2024

Botnets are typically more prevalent in cybercrime activities compared to APT, yet Kaspersky expects the latter to start using them more. The first reason is to bring more confusion for the defense. Attacks leveraging botnets might “obscure the targeted nature of the attack behind seemingly widespread assaults,” according to the researchers. In that case, defenders might find it more challenging to attribute the attack to a threat actor and might believe they face a generic widespread attack. The second reason is to mask the attackers’ infrastructure. The botnet can act as a network of proxies, but also as intermediate command and control servers. ... The global increase in using chatbots and generative AI tools has been beneficial in many sectors over the last year. Cybercriminals and APT threat actors have started using generative AI in their activities, with large language models explicitly designed for malicious purposes. These generative AI tools lack the ethical constraints and content restrictions inherent in authentic AI implementations.


Alternative data investing: Why connected vehicle data is the future

One of the most promising subsectors of the alternative data realm is geolocation, standing at an impressive valuation of $400 million. Geolocation is prized for its ability to correlate ground-level activities to consumer trends, business health and revenue. But within this sphere, the real game-changer is ‘connected vehicle’ data. Connected vehicle data, a subset of geolocation, is an invaluable resource for investors. It enables analysis of both passenger car and truck activities across almost any location. This opens a window into consumer trends, helping investors decipher current demand dynamics before company earnings calls. Moreover, tracking truck activity provides insights into a company’s supply chain health. By monitoring truck traffic at key economic areas – be it manufacturing facilities, warehouses, distribution centers or seaports – investors can gauge a company’s production, distribution and supply chain efficiencies. This level of detail can provide a holistic view of a company’s operations and its future revenue potential.


The Potential Impact of Quantum Computing on Data Centre Infrastructure

All kinds of use cases involving complex algorithms are candidates for being addressed using quantum computers. Use cases around financial modelling and risk analysis at the macro level, environmental analysis and climate modelling (especially for development projects in ecologically sensitive areas), supply chain optimisation, life sciences, AI-based drug discovery and drug repurposing, custom treatment for complex diseases, etc. would be candidates for using quantum computing in data centres. Apart from the above, one key area where quantum computing will impact everybody is deepfakes. In a very short time, generative AI has shown its capability of creating fake videos of anybody, with little training material. ... Quantum computing will play a key role in providing the infrastructure needed to implement algorithms that can identify such fake videos and stop them before they go viral and create law and order problems in societies. Players like Facebook (including WhatsApp) and Instagram have strong usage requirements for quantum computing to address the menace of fake news and fake videos.


7 steps for turning shadow IT into a competitive edge

A formalized and transparent prioritization process is also important. CIOs need a way to capture lightweight business cases or forecast business value to help prioritize new opportunities. At the same time, CIOs, CISOs, and compliance officers need to establish a risk management framework to quantify when shadow IT creates business issues or significant risks. CIOs should partner with CFOs in this endeavor because when departments procure their own technologies without IT, there are often higher procurement costs and implementation risks. CIOs should also elicit their enterprise architect’s guidance on where reusable platforms and common services yield cost and other business benefits. “Shadow IT often wastes resources by not generating documentation for software that would make it reusable,” says Anant Adya, EVP at Infosys Cobalt. “Insightful and far-reaching governance coupled with detailed application privileges discourage shadow IT and helps build collaborative operating models.” Creating technology procurement controls that require CIO and CISO collaboration on technology spending is an important step to reduce shadow IT.


Running Automation Tests at Scale Using Cypress

With Cypress, teams can easily create web automation tests, debug them visually, and automatically run them in CI/CD pipelines, thus helping with Continuous Integration (CI) and development. Though Cypress is often compared with Selenium WebDriver, it is fundamentally and architecturally different. It does not use Selenium WebDriver for automation testing, thus enabling users to write faster, easier, and more reliable tests. The installation and setup of Cypress is also easier compared to other test automation frameworks, as it is a node package. You just need to run the npm command npm install cypress and then use the Cypress framework. ... Cypress offers some out-of-the-box features to run automation tests at scale. Time travel: Cypress allows you to "time travel" through the web application, meaning it shows what happens at each step as the tests execute. We can step forward, backward, and even pause the test execution at run time, giving us the flexibility to inspect the application’s state during real-time test execution. Auto waits: Cypress has inbuilt auto-wait functionality that automatically waits for commands and assertions to complete before moving to the next step.
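
To ground those features, here is what a minimal Cypress spec looks like. The URL and selectors are placeholders for your own application, but the auto-waiting behavior shown in the final assertions is exactly what removes the manual sleeps other frameworks require.

```typescript
// cypress/e2e/login.cy.ts - a minimal spec showing auto-waiting in action.
// describe/it/cy are globals that Cypress injects when it runs the spec.
describe("login flow", () => {
  it("logs in and lands on the dashboard", () => {
    cy.visit("https://example.com/login"); // placeholder app under test
    cy.get("input[name=email]").type("user@example.com");
    cy.get("input[name=password]").type("s3cret!");
    cy.get("button[type=submit]").click();
    // No manual sleeps: Cypress automatically retries the assertions below
    // until the dashboard renders or the default timeout elapses.
    cy.contains("h1", "Dashboard").should("be.visible");
    cy.url().should("include", "/dashboard");
  });
});
```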



Quote for the day:

"Success is how high you bounce when you hit bottom." -- Gen. George Patton

Daily Tech Digest - November 20, 2023

6 most underhyped technologies in IT — plus one that’s not dead yet

Although AI gets all the attention, the key components that make it work often do not, including data. Yet as organizations eagerly embrace AI in all its forms, many have neglected parts of their data management needs, says Laura Hemenway, president, founder, and principal of Paradigm Solutions, which supports large enterprise-wide transformations. Even those who are on top of data management often downplay the powerful work their data management tools do. As such, Hemenway thinks data management software deserves more recognition for the important job it does, even as the work involved is often considered a tedious task that doesn’t have the pizzazz of making the most of ChatGPT. Still, sound data management is a linchpin for AI and other analytics work, which underpins a whole host of processes deemed critical in modern business ... But with no big breakthroughs, interest fizzled and the metaverse found itself on some overhyped tech lists. But don’t be so quick to write it off, warns Taylor, who thinks this category of tech has been unfairly downgraded, which lands it on his list of underhyped technologies.


How to Sell Technical Debt From a DevOps Perspective?

In the course of my journey, I have formed 3 categories of business motivation for "buying" technical debt: "Fat" indifference - When there's a rich investor, the CEO can afford a development team of weird geeks. It is like, "Well, let them do it! The main thing is to get the product done, and the team spirit is wow, everything is cool, and we'd be the best office in the world". ... Fear - This is one of the most effective, widespread, and efficient models for selling technical debt. What kind of "want" can we talk about here when it's scary? It's about when something happens, like a client leaving because of a failure or a hack, all because of low quality, slowness, or something else. But bluntly selling through fear is also bad. Speculation with fear works against trust. You need to sell carefully and as honestly as possible. Trust - This is when a business gives you as much time as you need to work on technical debt. Trust works and is preserved only when you keep this work to a small share of the total time and are as pragmatic as possible in taking it. Otherwise, trust is destroyed. Moreover, it does not accumulate; it is a constant process that goes in waves: trust increases and then fades.


Defending Logistics After Cyberattack on DP World Australia

Ransomware is obviously more than a pricey nuisance for companies. “The costs are like millions of dollars for each attack,” Austin says. While businesses often acknowledge that supply chain security and data protection are important priorities, there can be challenges acting on those fronts. “The problem is a lot of them suffer from understaffing,” he says. “They don’t have enough people and logistics, and so they’re struggling with that.” There is a presumption of smooth operations across the supply chain but cyberattacks and other disruptions can deliver wakeup calls. “Prior to the pandemic, a number of companies never realized how important a well-functioning supply chain is, how much they matter,” Austin says. The rise of the pandemic saw cargo getting backed up at various ports around the world, disrupting access and delivery of goods. The cyberattack on DP World Australia was a reminder that intentional targeting by bad actors can also put the supply chain in a chokehold. It is debatable how disconnecting and then later reconnecting to the internet affected the situation DP World Australia faced.


Only 9% of IT budgets are dedicated to security

With rising risk and shrinking resources, the message is clear: businesses need new methods to improve their security. Compounding the urgency is ever-evolving global regulation and the growing time-suck of complying with an increasing number of standards. Organizations are at an impasse in an environment where customers want more insight into a company’s security practices. Two-thirds say that customers, investors and suppliers are increasingly seeking proof of security and compliance. While 41% provide internal audit reports, 37% third-party audits, and 36% complete security questionnaires, 12% admit they don’t or can’t provide evidence when asked. That means companies worldwide are falling at the very first hurdle – costing them potential revenue and growth opportunities in new markets. Businesses spend an average of 7.5 hours per week – more than 9 working weeks a year – on achieving security compliance or staying compliant. 54% are concerned that secure data management is becoming more challenging with AI adoption with 51% saying that using generative AI could erode customer trust.


The Power of Preference in the Wake of Privacy Regulations

Providing customers with autonomy to dictate their own data-sharing preferences isn’t just a legal obligation; it’s also a key way to improve trust, establish transparency and strengthen brand loyalty. Additionally, teams can use this highly personalized data to tailor their marketing efforts, so they’re only serving up content and communications that are the most relevant to individual customers. As such, business leaders shouldn’t feel hindered or restricted by legal requirements like Law 25. Instead, it should challenge businesses to consider this renewed emphasis on consumer autonomy as a positive development. This is especially true for companies that deal with our most sensitive data (i.e. financial and health information). Beyond these updated privacy regulations, financial services and healthcare providers could face serious legal repercussions if customer and patient information is obtained without consent or ends up in the wrong hands. Developing a consumer-centric strategy anchored on up-to-date preferences is therefore an absolute necessity.


How To Attract Premium Clients And Charge Accordingly — Even During Market Instability

In the e-commerce marketing world, we often hear that we need to speak to the client's pain points — to amplify that fear so people are motivated to buy — but we advise against using this common method. If we are constantly speaking to that disillusioned version of our client, it takes us longer to scale a business. It means we have to drag, convince and educate. Instead, elevate the quality of clients you are attracting. Avoid using fear-based marketing to sell. Out of our sample size of 300-plus clients in a variety of sectors, just by shifting the language, the quality of the client improved 100% of the time. The future of marketing is to speak to the empowered version of your client because today's consumer is more sophisticated than ever. When you talk to that client, you're attracting clients who are resourceful and willing to bet on themselves and see their value. You'll elevate the type of clients you attract, and they're willing to invest more. ... Price the services you offer based on the value you bring to the table, specifically on the lifetime value that it will provide to the client. 


Powering a Greener Future: How Data Centers Can Slash Emissions

As data and analytics have inarguably become the fuel of business success, the rise of data centers is outpacing our ability to mitigate the resultant carbon emissions. If data industry leaders don’t seek new methods of carbon reduction and embrace more energy-efficient processing, the costs will quickly become insurmountable. Thankfully, companies are increasingly setting specific carbon emission targets, either because of their own environmental, social, and governance (ESG) goals or due to legal requirements or regulations. In fact, these targets may even be good for business. A recent McKinsey study found that companies with products with ESG-related claims saw 8% more cumulative growth than companies that did not associate their products with ESG. A recent poll of American consumers found that, despite inflation, 66% of consumers are willing to pay more for sustainable products and services. Many organizations already aim to have net-zero emissions by 2050, but most are focusing on alternative and renewable energies, which is good but insufficient because it misses the core of the problem: the overconsumption of energy in the data center due to misused infrastructure.


Three Causes of Cloud Migration Failure in Large Enterprises

Cloud migration is not a simple lift-and-shift operation; it involves myriad complexities that demand careful consideration. Underestimating these complexities is a significant pitfall that can lead to costly failures in large enterprise cloud migrations. Transferring vast amounts of data while ensuring seamless integration with existing systems is a significant challenge. Not all applications can seamlessly transition to the cloud. Some require considerable reconfiguration or redevelopment. Ensuring data security and compliance with regulatory standards is complex, with varying requirements across industries and regions. It can be intricate to optimize performance in the cloud, including network latency and resource allocation, and daunting to track and control cloud costs amid scalability and resource provisioning complexities. ... Employee resistance to change is a critical factor that can make or break a cloud migration initiative in large enterprises. In fact, industry leaders emphasize that employee resistance to change is the primary reason for enterprise cloud migration failures.


A Detection and Response Benchmark Designed for the Cloud

Operating in the cloud securely requires a new mindset. Cloud-native development and release processes pose unique challenges for threat detection and response. DevOps workflows — including code committed, built, and delivered for applications — involve new teams and roles as key players in the security program. Rather than the exploitation of traditional remote code execution vulnerabilities, cloud attacks focus more heavily on software supply chain compromise and identity abuse, both human and machine. Ephemeral workloads require augmented approaches to incident response and forensics. While identity and access management, vulnerability management, and other preventive controls are necessary in cloud environments, you cannot stay safe without a threat detection and response program to address zero-day exploits, insider threats, and other malicious behavior. It's impossible to prevent everything. The 5/5/5 benchmark challenges organizations to acknowledge the realities of modern attacks and to push their cloud security programs forward.


Are Business Continuity Plans Still Relevant?

The successful organizations focused on building teams that were adept at proactively responding to near and longer-term challenges. The less successful were reactionary, starting by executing procedures in plans that focused on short-term outcomes. Taken one step further, those organizations that really knew what it took to deliver products and services, how they reached their customers and suppliers, and the relationship between processes, resources, and third parties were able to better respond and prevent disruption or other forms of unacceptable impact. ... When you lack a full picture view of your business operations and go-to-market strategy, dependencies and interdependencies are often overlooked. Developing and maintaining a digital model of your organization, its products/services, and business processes offers a valuable resource to query. This digital model gives you an end-to-end perspective on your operations, which is invaluable for assessing vulnerabilities like identifying and treating critical single points of failure or those parts of the business without a recovery strategy, addressing change management, and making better business decisions.



Quote for the day:

"Positive thinking will let you do everything better than negative thinking will." -- Zig Ziglar