Daily Tech Digest - August 02, 2024

Small language models and open source are transforming AI

From an enterprise perspective, the advantages of embracing SLMs are multifaceted. These models allow businesses to scale their AI deployments cost-effectively, an essential consideration for startups and midsize enterprises that need to maximize their technology investments. Enhanced agility becomes a tangible benefit as shorter deployment times and easier customization align AI capabilities more closely with evolving business needs. Data privacy and sovereignty (perennial concerns in the enterprise world) are better addressed with SLMs hosted on-premises or within private clouds. This approach satisfies regulatory and compliance requirements while maintaining robust security. Additionally, the reduced energy consumption of SLMs supports corporate sustainability initiatives. That’s still important, right? The pivot to smaller language models, bolstered by open source innovation, reshapes how enterprises approach AI. By mitigating the cost and complexity of large generative AI systems, SLMs offer a viable, efficient, and customizable path forward. This shift enhances the business value of AI investments and supports sustainable and scalable growth. 


The Impact and Future of AI in Financial Services

Winston noted that AI systems require vast amounts of data, which raises concerns about data privacy and security. “Financial institutions must ensure compliance with regulations such as GDPR [General Data Protection Regulation] and CCPA [California Consumer Privacy Act] while safeguarding sensitive customer information,” he explained. Simply using general GenAI tools as a quick fix isn’t enough. “Financial services will need a solution built specifically for the industry and [that] leverages deep data related to how the entire industry works,” said Kevin Green, COO of Hapax, a banking AI platform. “It’s easy for general GenAI tools to identify what changes are made to regulations, but if it does not understand how those changes impact an institution, it’s simply just an alert.” According to Green, the next wave of GenAI technologies should go beyond mere alerts; they must explain how regulatory changes affect institutions and outline actionable steps. As AI technology evolves, several emerging technologies could significantly transform the financial services industry. Ludwig pointed out that quantum computers, which can solve complex problems much faster than traditional computers, might revolutionize risk management, portfolio optimization, and fraud detection.


Is Your Data AI-Ready?

Without proper data contextualization, AI systems may make incorrect assumptions or draw erroneous conclusions, undermining the reliability and value of the insights they generate. To avoid such pitfalls, focus on categorizing and classifying your data with the necessary metadata, such as timestamps, location information, document classification, and other relevant contextual details. This will enable your AI to properly understand the context of the data and generate meaningful, actionable insights. Additionally, integrating complementary data can significantly enhance the information’s value, depth, and usefulness for your AI systems to analyze. ... Although older data may be necessary for compliance or historical purposes, it may not be relevant or useful for your AI initiatives. Outdated information can burden your storage systems and compromise the validity of the AI-generated insights. Imagine an AI system analyzing a decade-old market report to inform critical business decisions—the insights would likely be outdated and misleading. That’s why establishing and implementing robust retention and archiving policies as part of your information life cycle management is critical. 
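As a rough illustration of that kind of contextualization and retention filtering, the Python sketch below attaches metadata to a record and screens out stale data; the field names, classification labels, and three-year window are assumptions made for the example, not recommendations from the article.

from datetime import datetime, timedelta, timezone

# Illustrative field names, labels, and retention window; none come from the article.
RETENTION_WINDOW = timedelta(days=3 * 365)   # keep roughly three years of history

def contextualize(record, source, classification):
    """Attach the contextual metadata an AI pipeline needs to interpret a record."""
    return {
        **record,
        "ingested_at": datetime.now(timezone.utc).isoformat(),
        "source_system": source,
        "classification": classification,    # e.g. "public", "internal", "restricted"
    }

def is_ai_relevant(record, now=None):
    """Apply a simple retention rule: exclude records older than the window."""
    now = now or datetime.now(timezone.utc)
    observed = datetime.fromisoformat(record["observed_at"])
    return (now - observed) <= RETENTION_WINDOW

raw = {"observed_at": "2015-06-01T00:00:00+00:00", "region": "EMEA", "revenue": 1.2e6}
enriched = contextualize(raw, source="crm_export", classification="internal")
print(is_ai_relevant(enriched))   # False: a decade-old market record is filtered out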


Generative AI: Good Or Bad News For Software

There are plenty of examples of breaches that started thanks to someone copying over code and not checking it thoroughly. Think back to the Heartbleed exploit, a security bug in a popular library that led to the exposure of hundreds of thousands of websites, servers and other devices which used the code. Because the library was so widely used, the thought was, of course, someone had checked it for vulnerabilities. But instead, the vulnerability persisted for years, quietly used by attackers to exploit vulnerable systems. This is the darker side to ChatGPT; attackers also have access to the tool. While OpenAI has built some safeguards to prevent it from answering questions regarding problematic subjects like code injection, the CyberArk Labs team has already uncovered some ways in which the tool could be used for malicious reasons. Breaches have occurred due to blindly incorporating code without thorough verification. Attackers can exploit ChatGPT, using its capabilities to create polymorphic malware or produce malicious code more rapidly. Even with safeguards, developers must exercise caution. ChatGPT generates the code, but developers are accountable for it.


FinOps Can Turn IT Cost Centers Into a Value Driver

Once FinOps has been successfully implemented within an organization, teams can begin to automate the practice while building a culture of continuous improvement. Leaders can now better forecast and plan, leading to more precise budgeting. Additionally, GenAI can provide unique insights into seasonality. For example, if resource demand spikes every three days or at other unpredictable frequencies, AI can help you detect these patterns so you can optimize by scaling up when required and back down to save costs during lulls in demand. This kind of pattern detection is difficult without AI. It all goes back to the concept of understanding value and total cost. With FinOps, IT leaders can demonstrate exactly what they spend on and why. They can point out how the budget for software licenses and labor is directly tied to IT operations outcomes, translating into greater resiliency and higher customer satisfaction. They can prove that they’ve spent money responsibly and that they should retain that level of funding because it makes the business run better. FinOps and AI advancements allow businesses to do more and go further than they ever could. Almost 65% of CFOs are integrating AI into their strategy.
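A minimal sketch of that kind of pattern detection, assuming a simple daily usage series and plain autocorrelation rather than any particular FinOps or GenAI product:

import numpy as np

# Hypothetical daily usage series (units per day); real data would come from
# the cloud provider's billing or metrics APIs.
rng = np.random.default_rng(0)
days = 90
usage = 100 + 5 * rng.standard_normal(days)
usage[::3] += 60  # a spike every third day, the pattern we want to detect

def dominant_period(series, max_lag=30):
    """Return the lag (in days) with the strongest autocorrelation."""
    x = series - series.mean()
    best_lag, best_corr = 0, -np.inf
    for lag in range(2, max_lag + 1):
        corr = np.corrcoef(x[:-lag], x[lag:])[0, 1]
        if corr > best_corr:
            best_lag, best_corr = lag, corr
    return best_lag, best_corr

lag, corr = dominant_period(usage)
print(f"Strongest recurrence every {lag} days (correlation {corr:.2f})")
# Knowing the period, schedulers can scale capacity up just before each spike
# and back down during the lulls in demand.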


The convergence of human and machine in transforming business

To achieve a true collaboration between humans and machines, it is crucial to establish a clear understanding and definition of their respective roles. By emphasizing the unique strengths of AI while strategically addressing its limitations, organizations can create a synergy that maximizes the potential of both human expertise and machine capabilities. AI excels in data structuring, capable of transforming complex, unstructured information into easily searchable and accessible content. This makes it an invaluable tool for sorting through vast online datasets, including news articles, academic reports and other forms of digital content, extracting meaningful insights. Moreover, AI systems operate tirelessly, functioning 24/7 without the need for breaks or downtime. This "always on" nature ensures a constant state of productivity and responsiveness, enabling organizations to keep pace with the rapidly changing market. Another key strength of AI lies in its scalability. As data volumes continue to grow and the complexity of tasks increases, AI can be integrated into existing workflows and systems, allowing businesses to process and analyze vast amounts of information efficiently.


The Crucial Role of Real-time Analytics in Modern SOCs

Security analysts often spend considerable time manually correlating diverse data sources to understand the context of specific alerts. This process leads to inefficiency, as they must scan various sources, determine if an alert is genuine or a false positive, assess its priority, and evaluate its potential impact on the organization. This tedious and lengthy process can lead to analyst burnout, negatively impacting SOC performance. ... Traditional Security Information and Event Management (SIEM) systems in SOCs struggle to effectively track and analyze sophisticated cybersecurity threats. These legacy systems often burden SOC teams with false positives and negatives. Their generalized approach to analytics can create vulnerabilities and strain SOC resources, requiring additional staff to address even a single false positive. In contrast, real-time analytics or analytics-driven SIEMs offer superior context for security alerts, sending only genuine threats to security teams. ... Staying ahead of potential threats is crucial for organizations in today's landscape. Real-time threat intelligence plays a vital role in proactively detecting threats. Through continuous monitoring of various threat vectors, it can identify and stop suspicious activities or anomalies before they cause harm.


Architecting with AI

Every project is different, and understanding the differences between projects is all about context. Do we have documentation of thousands of corporate IT projects that we would need to train an AI to understand context? Some of that documentation probably exists, but it's almost all proprietary. Even that's optimistic; a lot of the documentation we would need was never captured and may never have been expressed. Another issue in software design is breaking larger tasks up into smaller components. That may be the biggest theme of the history of software design. AI is already useful for refactoring source code. But the issues change when we consider AI as a component of a software system. The code used to implement AI is usually surprisingly small — that's not an issue. However, take a step back and ask why we want software to be composed of small, modular components. Small isn't "good" in and of itself. ... Small components reduce risk: it's easier to understand an individual class or microservice than a multi-million line monolith. There's a well-known paper that shows a small box, representing a model. The box is surrounded by many other boxes that represent other software components: data pipelines, storage, user interfaces, you name it.


Hungry for resources, AI redefines the data center calculus

With data centers near capacity in the US, there’s a critical need for organizations to consider hardware upgrades, he adds. The shortage is exacerbated because AI and machine learning workloads will require modern hardware. “Modern hardware provides enhanced performance, reliability, and security features, crucial for maintaining a competitive edge and ensuring data integrity,” Warman says. “High-performance hardware can support more workloads in less space, addressing the capacity constraints faced by many data centers.” The demands of AI make for a compelling reason to consider hardware upgrades, adds Rob Clark, president and CTO at AI tool provider Seekr. Organizations considering new hardware should pull the trigger based on factors beyond space considerations, such as price and performance, new features, and the age of existing hardware, he says. Older GPUs are a prime target for replacement in the AI era, as memory per card and performance per chip increase, Clark adds. “It is more efficient to have fewer, larger cards processing AI workloads,” he says. While AI is driving the demand for data center expansion and hardware upgrades, it can also be part of the solution, says Timothy Bates, a professor in the University of Michigan College of Innovation and Technology.


How to Bake Security into Platform Engineering

A key challenge for platform engineers is modernizing legacy applications, which often include security holes. “Platform engineers and CIOs have a responsibility to modernize by bridging the gap between the old and new and understanding the security implications between the old and new,” he says. When securing the software development lifecycle, organizations should secure both continuous integration and continuous delivery/continuous deployment pipelines as well as the software supply chain, Mercer says. Securing applications entails “integrating security into the CI/CD pipelines in a seamless manner that does not create unnecessary friction for developers,” he says. In addition, organizations must prioritize educating employees on how to secure applications and software supply chains. ... As part of baking security into the software development process, security responsibility shifts from the cybersecurity team to the development organization. That means security becomes as much a part of deliverables as quality or safety, Montenegro says. “We see an increasing number of organizations adopting a security mindset within their engineering teams where the responsibility for product security lies with engineering, not the security team,” he says.



Quote for the day:

“If you really want to do something, you will work hard for it.” -- Edmund Hillary

Daily Tech Digest - August 01, 2024

These are the skills you need to get hired in tech

While soft skills are important, communicating them to a prospective employer can present a conundrum. Tina Wang, division vice president of human resources at ADP, said there are a few ways for job seekers to bring attention to their behavioral skills. It goes beyond just listing “strong work ethic” or “problem solving” on a resume, “though it’s good to add it there too,” she said. Job seekers can incorporate behavior skills in a track record of job experiences. ... An interview with a prospective employer is also a good time to introduce behavioral skills, but time is limited and job-seekers won’t likely be able to share all their demonstrated skills and experience. “Preparation will go a long way, so think through your talking points and what is important to share,” Wang said. “Think about a few applicable, real work experiences where you demonstrated these skills and sketch out how and when to bring them during the interview process.” References can also be an excellent way to highlight behavioral skills. Intangibles such as a strong work ethic or attention to detail might be something former managers, team members or peers identify. 


Ideal authentication solution boils down to using best tools to stop attacks

Given the shifting nature of work, with more employees working remotely, the variety of gaps in protection is manifold. Clunky authentication experiences mean users are often asked to sign in multiple times a day for different applications and accounts. “Users get extremely frustrated when this occurs, and they end up having resistance to adopting these authentication methods,” Anderson says. To improve the situation, organizations need to manage authentication scenarios in onboarding, use session tokens to remember logins, and contend with the reality that username and password authentication is still used extensively throughout the security landscape, leaving openings for fraud. “Passkeys are good for users because they simplify and streamline the actual authentication ceremony itself, where the user is actively involved,” Miller says. “It doesn’t necessarily decrease the number of times they have to authenticate but it does make it simpler and less taxing.” “They also have knock-on benefits of reducing the amount of information that leaks in the case of a database leak that can be used by an attacker. It shrinks the blast radius of account compromise.”


Should Today’s Developers Be More or Less Specialized?

“The need for specialists is not going to change. If anything, I expect it to increase,” says Hillion. “We still have a number of clients who rely on full-stack developers. I would say the general trend is towards businesses needing more specialized developers who have the right combination of technical skillsets and sector knowledge to deliver what is needed into the complex tech stack. There is significant demand for developers who specialize in particular industry sectors.” ... “Without basic knowledge, pursuing any specific development area is challenging,” says Ivanov in an email interview. “That’s why starting by mastering basic technologies that someone is most proficient in helps them learn new things faster. However, core technologies should not be the end goal. It is also essential to stay up to date with technology trends and always continue using new technology.” Tasks that go beyond standard or general requirements need the involvement of specialists who have knowledge and experience in specific areas. For example, a project that requires complex algorithms or specific technologies will require a specialist with a deep understanding of them.


Between sustainability and risk: why CIOs are considering small language models

“In LLMs, the bulk of the data work is done statistically and then IT trains the model on specific topics to correct errors, giving it targeted quality data,” he says. “SLMs cost much less and require less data, but, precisely for this reason, the statistical calculation is less effective and, therefore, very high-quality data is needed, with substantial work by data scientists. Otherwise, with generic data, the model risks producing many errors.” Furthermore, SLMs are so promising and interesting for companies that even big tech offers and advertises them, like Google’s Gemma and Microsoft’s Phi-3. For this reason, according to Esposito, governance remains fundamental, within a model that should remain a closed system. “An SLM is easier to manage and becomes an important asset for the company in order to extract added value from AI,” he says. “Otherwise, with large models and open systems, you have to agree to share strategic company information with Google, Microsoft, and OpenAI. This is why I prefer to work with a system integrator that can develop customizations and provide a closed system, for internal use.”


Why geographical diversity is critical to build effective and safe AI tools

Geographical diversity is critical as organizations look to develop AI tools that can be adopted worldwide, according to Andrea Phua, senior director of the national AI group and director of the digital economy office at Singapore's Ministry of Digital Development and Information (MDDI). ... "The use of Gen AI has brought a new dimension to cyber threats. As AI becomes more accessible and sophisticated, threat actors will also become better at exploiting it," said CSA's chief executive and Commissioner of Cybersecurity David Koh. "As it is, AI already poses a formidable challenge for governments around the world [and] cybersecurity professionals would know that we are merely scratching the surface of gen AI's potential, both for legitimate applications and malicious uses," Koh said. He pointed to reports of AI-generated content, including deepfakes in video clips and memes, that have been used to sow discord and influence the outcome of national elections. At the same time, there are new opportunities for AI to be tapped to enhance cyber resilience and defense, he said. 


Cloud Migration Regrets: Should You Repatriate?

With increasing pressure to cut costs, many CTOs and CIOs are considering repatriating cloud workloads back on premises. As hard as it may seem, it’s important to think beyond just the cost. You must understand workload requirements to make sound decisions for each application. ... A lot of organizations have forgotten how much IT operations have changed since moving to the cloud. Cloud transformation meant revamping ITOps based on the chosen mix of Infrastructure-, Platform- or Software-as-a-Service (IaaS, PaaS or SaaS) services. Bringing applications back on premises strips away those service layers, and Ops teams may no longer be able or willing to accept the administrative and maintenance burden again. One final consideration before moving workloads off the cloud is security. I think security is one of the many advantages of cloud infrastructure. When businesses first started moving to the cloud, security was one of the biggest concerns. It turns out that cloud providers are better at security than you are. They can’t fix security holes in your software or other operator error scenarios, but a cloud infrastructure provides greater isolation if a breach does occur. 


Chess, AI & future of leadership

As computing power increases and its access cost reduces, AI will become the central force that drives all activities, including imagination! So, imagine the chessboard being AI-enabled. The board now has its intelligence with the ability to understand the context of the game to prompt the next set of moves. The difference between the board-level AI and the AI used by the player as her assistant is that the assistant knows the player’s psyche of defending or attacking, strengths and weaknesses of the player and her opponent, and factors these while offering suggestions. The two AIs may or may not be aligned in their suggestions since both may be accessing different references. Let’s activate the third dimension in chess – the pieces are also intelligent! They know their roles and those of the others. They too can think, strategise, and suggest. For instance, in a choice to move between the rook and the knight, the rook suggests the knight moves. The knight feels the Queen should move! This is the egalitarian version of chess! Does it feel real and practical? In the context of AI, there’s the Large Language Model, which processes data from a vast set of sources with a large number of constraints and rules. 


DigiCert validation bug sets up 83,267 SSL certs for revoking

One of the validation methods approved by the Certification Authority Browser Forum (CABF), whose guidelines provide best practices for securing internet transactions in browsers and other software, involves the customer adding a DNS CNAME record that includes a random value supplied by its certificate provider. The provider, in this case DigiCert, then does a DNS lookup and verifies that the random value is as provided, confirming that the customer controls the domain. The CABF requires that, in one format of the DNS CNAME entry, the random value be prefixed with an underscore, and DigiCert discovered that, in some cases, that character was not included, rendering the validation non-compliant. By CABF rules, those certificates must be revoked within 24 hours, with no exceptions. However, DigiCert said in an update to its status page Tuesday, and in an email to customers, “Unfortunately, some customers operating critical infrastructure are not in a position to have all their certificates reissued and deployed in time without critical service interruptions. To avoid disruption to critical services, we have engaged with browser representatives alongside these customers over the last several hours. ...”
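For illustration, here is a sketch of the validation logic the CABF method implies, written in Python with the dnspython library; the record layout, label, and target host are assumptions for the example, not DigiCert's actual implementation.

import secrets
import dns.resolver  # dnspython; install with `pip install dnspython`

def issue_challenge():
    """Generate the random value the CA hands to the customer. In the format
    at issue, the value must be published with a leading underscore."""
    return "_" + secrets.token_hex(16)

def validate_domain_control(domain, random_value, expected_target="dcv.example-ca.com"):
    """Check that <random_value>.<domain> is a CNAME pointing at the CA's
    validation host. Names here are illustrative only."""
    if not random_value.startswith("_"):
        # This is the kind of check that was missing: without the underscore
        # prefix, the validation is non-compliant with the CABF requirement.
        return False
    name = f"{random_value}.{domain}"
    try:
        answers = dns.resolver.resolve(name, "CNAME")
    except (dns.resolver.NXDOMAIN, dns.resolver.NoAnswer):
        return False
    return any(str(r.target).rstrip(".") == expected_target for r in answers)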


Mind the Gap: Data Quality Is Not “Fit for Purpose”

When talking about data quality, we must therefore be clear about whose purpose, what requirements, established when, and by whom. Within the context of the DMBoK definition, the answer is that every consumer evaluates the quality of a data set independently. Data is considered to be of high quality when it is fit for my purpose, satisfies my requirements, established by me when I need the data. Data quality, defined in this way, is truly in the eye of the beholder. Furthermore, data quality analyses cannot be leveraged by new consumers. For decades, we in decision support have been selling the benefits of leveraging data across applications and analyses. It has been the fundamental justification for data warehouses, data lakes, data lakehouses, etc. But misalignment between the purpose for which data was created and the purpose for which it is being used may not be immediately apparent. Especially when the data is not well understood. The consequences are faulty models and erroneous analyses. We reflexively blame the quality of the data, but that’s not where the problem lies. This is not data quality. It is data fitness. 


Navigating Hope and Fear in a Socio-Technical Future

It is not about just spending more; that isn't really working. You must SPEND BETTER. I and other architects literally train for decades to both cut costs and make great investment decisions. Technical debt accrual, technical health goals, and technical strategy don't just deserve a seat at the table. They are becoming the table. A little more rationally, in all complex engineering fields, we are required to get signoff from legitimate professionals who have been measured against legitimate and hard-earned competencies. Not only does this create more stable outcomes, it actually saves and makes the economy money. Instead of ‘paying for two ok systems’, we pay for ‘one great one’. ... In all complex engineering ecosystems it is not just outputs and companies that are regulated. The role and skills of architects and engineers are not secret, and they really aren’t that different by company. I believe I am the world's expert on architecture skills, or at least one of a dozen of them. I have interviewed and assessed hundreds of companies, and thousands of architects. It is time to begin licensing. And it must be handed to a real professional society. It cannot be a vendor consortium.



Quote for the day:

"You’ll never achieve real success unless you like what you’re doing." -- Dale Carnegie

Daily Tech Digest - July 31, 2024

Rise of Smart Testing: How AI is Revolutionizing Software Testing

Development teams must use new techniques when creating and testing applications. Test automation can greatly increase productivity and accuracy, but traditional frameworks frequently necessitate a great deal of manual labor for script construction and maintenance, which may restrict their efficacy and capacity to scale. ... Agile development is based on continuous improvement, but rapid code changes can put a burden on conventional test automation techniques. This is where self-healing test automation scripts come in. The release cycle is slowed down by test cases that become brittle and need ongoing maintenance. Frameworks with AI capabilities are able to recognize these changes and adjust accordingly. This translates into shorter release cycles, less maintenance overhead, and self-healing test scripts. ... Extensive test coverage is a difficult goal to accomplish using traditional testing techniques. Artificial intelligence (AI) fills this gap by automatically generating a wide range of test cases by evaluating requirements, code, and previous tests. This covers scenarios—both positive and negative—that human testers might overlook, including edge cases.
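One simple form of self-healing, sketched below in Python with Selenium, is falling back across alternative locators for the same element when the primary one breaks; an AI-assisted framework would learn and rank these candidates rather than rely on a hand-written list, so treat the names and selectors here as assumptions.

from selenium import webdriver                      # pip install selenium
from selenium.webdriver.common.by import By
from selenium.common.exceptions import NoSuchElementException

# Ordered fallback locators for the same logical element.
CHECKOUT_BUTTON = [
    (By.ID, "checkout-btn"),
    (By.CSS_SELECTOR, "button[data-test='checkout']"),
    (By.XPATH, "//button[normalize-space()='Checkout']"),
]

def find_with_healing(driver, candidates):
    """Try each locator in turn, so a renamed id does not break the test."""
    for by, value in candidates:
        try:
            element = driver.find_element(by, value)
            print(f"Located element via {by}='{value}'")
            return element
        except NoSuchElementException:
            continue
    raise NoSuchElementException(f"No locator matched: {candidates}")

# Usage (assumes a page with such a button actually exists):
# driver = webdriver.Chrome()
# driver.get("https://shop.example.com/cart")
# find_with_healing(driver, CHECKOUT_BUTTON).click()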


What CISOs need to keep CEOs (and themselves) out of jail

Considering the changes in the Cyber Security Framework 2.0 (CSF 2.0) emphasizing governance and communication with the board of directors, Sullivan is right to assume that liability will not stop at the CISO and will likely move upwards. In his essay, Sullivan urges CEOs to give CISOs greater resources to do their jobs. But if he’s talking about funding to purchase more security controls, this might be a hard sell for CEOs. ... CEOs would benefit from showing that they care about cybersecurity and adding metrics to company reports to demonstrate it is a significant concern. For CISOs, agreeing to a set of metrics with the CEO would provide a visible North Star and a forcing function for aligning resources and headcount to ensure metrics continue to trend in the right direction. ... CEOs that are serious about cybersecurity must prioritize collaboration with their CISOs and putting them in the rotation for regular meetings. A healthy budget increase for tools may be necessary as AI injects many new risks, but it’s not sufficient nor is it the most important step. CISOs need better people and better processes to deliver on promises of keeping the enterprise safe. 


Who should own cloud costs?

The exponential growth of AI and generative AI initiatives is often identified as the true culprit. Although packed with potential, these advanced technologies consume extensive cloud resources, increasing costs that organizations often struggle to manage effectively. The main issues usually stem from a lack of visibility and control over these expenses. The problems go beyond just tossing around the term “finops” at meetings. It comes down to a fundamental understanding of who owns and controls cloud costs in the organization. Trying to identify cloud cost ownership and control often becomes a confusing free-for-all. ... Why does giving engineering control over cloud costs make such a difference? For one, engineers are typically closer to the actual usage and deployment of cloud resources. When they build something to run on the cloud, they are more aware of how applications and data storage systems use cloud resources. Engineers can quickly identify and rectify inefficiencies, ensuring that cloud resources are used cost-effectively. Moreover, engineers with skin in the game are more likely to align their projects with broader business goals, translating technical decisions into tangible business outcomes.


Generative AI and Observability – How to Know What Good Looks Like

In software development circles, observability is often defined as the combination of logs, traces and metric data to show how applications perform. In classic mechanical engineering and control theory, observability looks at the inputs and outputs for a system to judge how changes affect the results. In practice, looking at the initial requests and what gets returned provides data that can be used for judging performance. Alongside this, there is the quality of the output to consider as well. Did the result answer the user’s question, and how accurate was the answer? Were there any hallucinations in the response that would affect the user? And where did those results come from? Tracking AI hallucination rates across different LLMs and services shows how those services perform; levels of inaccuracy vary from around 2.5 percent to 22.4 percent. All the steps involved around managing your data and generative AI app can affect the quality and speed of response at runtime. For example, retrieval augmented generation (RAG) allows you to find and deliver company data in the right format to the LLM so that this context can provide a more relevant response.
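As a rough sketch of what capturing those inputs and outputs can look like, the Python snippet below wraps an arbitrary LLM call and logs the prompt, the retrieved RAG context, the response, and the latency; the field names and the stubbed model are assumptions made for the example, not any vendor's API.

import time
import json
import logging

logging.basicConfig(level=logging.INFO)
log = logging.getLogger("genai.observability")

def observed_completion(llm_call, prompt, context_docs):
    """Wrap an LLM call so each request emits the inputs, outputs and timing
    needed for observability. `llm_call` is any callable taking a prompt
    string and returning a response string; it stands in for whichever model
    client is actually in use."""
    start = time.perf_counter()
    response = llm_call(prompt)
    latency_ms = (time.perf_counter() - start) * 1000

    log.info(json.dumps({
        "prompt": prompt,
        "context_ids": [d["id"] for d in context_docs],  # provenance of RAG context
        "response": response,
        "latency_ms": round(latency_ms, 1),
        # A groundedness/hallucination score would be filled in by a separate
        # evaluator model or heuristic; it is left as a placeholder here.
        "groundedness": None,
    }))
    return response

# Usage with a stubbed model:
docs = [{"id": "policy-42", "text": "Refunds are issued within 14 days."}]
answer = observed_completion(lambda p: "Refunds take up to 14 days.",
                             "How long do refunds take?", docs)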


Security platforms offer help to tame product complexity, but skepticism remains

The biggest issue enterprises cited was what they saw as an inherent contradiction between the notion of a platform, which to them had the connotation of a framework on which things were built, and the specialization of most offerings. “You can’t have five foundations for one building,” one CSO said sourly, and pointed out that there are platforms for network, cloud, data center, application, and probably even physical security. While there was an enterprise hope that platforms would somehow unify security, they actually seemed to divide it. ... It seems to me that divided security responsibility, arising from the lack of a single CSO in charge, is also a factor in the platform question. Vendors who sell into such an account not only have less incentive to promote a unifying security platform vision, they may have a direct motivation not to do that. Of 181 enterprises, 47 admit that their security portfolio was created, and is sustained, by two or more organizations, and every enterprise in this group is without a CSO. Who would a security platform provider call on in these situations? Would any of the organizations involved in security want to share their decision power with another group?


The cost of a data breach continues to escalate

The type of attack influenced the financial damage, the report noted. Destructive attacks, in which the bad actors delete data and destroy systems, cost the most: $5.68 million per breach ($5.23 million in 2023). Data exfiltration, in which data is stolen, and ransomware, in which data is encrypted and a ransom demanded, came second and third, at $5.21 million and $4.91 million respectively. However, noted Fritz Jean-Louis, principal cybersecurity advisor at Info-Tech Research Group, sometimes attackers combine their tactics. “Double extortion ransomware attacks are a key factor that is influencing the cost of data breaches,” he said in an email. “Since 2023, we have observed that ransomware attacks now include double extortion attacks ... “This risk of shadow data will become even more elevated in the AI era, with data serving as the foundation on which new AI-powered applications and use-cases are being built,” added Jennifer Kady, vice president, security at IBM. “Gaining control and visibility over shadow data from a security perspective has emerged as a top priority as companies move quickly to adopt generative AI, while also ensuring security and privacy are at the forefront.”


If You are Reachable, You Are Breachable, and Firewalls & VPNs are the Front Door

It’s about understanding that the network is no longer a castle to be fortified but a conduit only, with entity-to-entity access authorized discretely for every connection based on business policies informed by the identity and context of the entities connecting. Gone are IP-based policies and ACLs, persistent tunnels, trusted and untrusted zones, and implicit trust. With a zero-trust architecture in place, the internet becomes the corporate network and point-to-point networking fades in relevance over time. Firewalls become like the mainframe – serving a diminishing set of legacy functions – and no longer hindering the agility of a mobile and cloud-driven enterprise. This shift is not just a technical necessity but also a regulatory and compliance imperative. With government bodies mandating zero-trust models and new SEC regulations requiring breach reporting, warning shots have been fired. Cybersecurity is no longer just an IT issue; it has elevated to a boardroom priority, with far-reaching implications for business continuity and reputation. Many access control solutions have claimed to adopt zero-trust by adding dynamic trust. 


Indian construction industry leads digital transformation in Asia pacific

“While challenges like the increasing prices of raw materials and growing competition persist in the Indian market, its current strong economic state and steady outlook for the forthcoming years, as reported by the IMF, have provided a congenial atmosphere for businesses to evaluate and adopt newer technologies, and consequently lead the Asia Pacific market in terms of investments in transformational technologies. Indian businesses have aptly recognised this phase as the ideal time to leverage digital technologies to identify newer growth pockets, usher in efficiencies throughout project lifecycles and give them a competitive edge,” said Sumit Oberoi, Senior Industry Strategist, Asia Pacific at Autodesk. “Priority areas for construction businesses to improve digital adoption include starting small, selecting a digital champion, tracking a range of success measures, and asking whether your business is AI ready.” he added. ... David Rumbens, Partner at Deloitte Access Economics, said, “The Indian construction sector, fuelled by a surge in demand for affordable housing as well as supportive government policies to boost urban infrastructure, is poised to make a strong contribution as India’s economy grows by 6.9% over the next year.”


Recovering from CrowdStrike, Prepping for the Next Incident

In the future, organizations could consider whether outside factors make a potential software acquisition riskier, Sayers said. A product widely used by Fortune 100 companies, for example, has the added risk of being an attractive target to attackers hoping to hit many such victims in a single attack. “There is a soft underbelly in the global IT world, where you can have instances where a particular piece of software or a particular vendor is so heavily relied upon that they themselves could potentially become a target in the future,” Sayers said. Organizations also need to identify any single points of failure in their environments — instances where they rely on an IT solution whose disruption, whether deliberate or accidental, could disrupt their whole organization. When one is identified, they need to begin planning around the risks and looking for backup processes. Sayers noted that some types of resiliency measures may be too expensive for most organizations to adopt; some entities are already priced out of just backing up all their data and many would be unable to afford maintaining backup, alternate IT infrastructure to which they could roll over.


AI And Security: It Is Complicated But Doesn't Need To Be

While AI may present a potential risk for companies, it could also be part of the solution. As AI processes information differently from humans, it can look at issues differently and come up with breakthrough solutions. For example, AI produces better algorithms and can solve mathematical problems that humans have struggled with for many years. As such, when it comes to information security, algorithms are king and AI, Machine Learning (ML) or a similar cognitive computing technology, could come up with a way to secure data. This is a real benefit of AI as it can not only identify and sort massive amounts of information, but it can identify patterns allowing organisations to see things that they never noticed before. This brings a whole new element to information security. ... As these solutions will bring benefits to the workplace, companies may consider putting non-sensitive data into systems to limit exposure of internal data sets while driving efficiency across the organisation. However, organisations need to realise that they can’t have it both ways, and data they put into such systems will not remain private.



Quote for the day:

“When we give ourselves permission to fail, we, at the same time, give ourselves permission to excel.” -- Eloise Ristad

Daily Tech Digest - July 30, 2024

Cyber security and Compliance: The Convergence of Regtech Solutions

While cybersecurity, in itself, is an area that requires significant resources to ensure compliance, a business organisation needs to deal with numerous other regulations. The business regulatory ecosystem is made up of over 1,500 acts and rules and more than 69,000 compliances. As such, each enterprise needs to figure out the regulatory requirements applicable to their business. The complexity of the compliance framework is such that businesses are often lagging behind their compliance timelines. Take, for instance, a single-entity MSME with a single-state operation involved in manufacturing automotive components. Even such an operation requires the employer to keep up with 624 unique compliances. These requirements can reach close to 1,000 for a pharmaceutical enterprise. Persisting with manual compliance methods while technology has taken over every other business operation has become the root cause of delays, lapses, and defaults. While businesses are investing in the best possible technological solutions for cybersecurity issues, they are disregarding the impact of technology on their compliance functions.


Millions of Websites Susceptible to XSS Attack via OAuth Implementation Flaw

Essentially, the ‘attack’ requires only a crafted link to Google (mimicking a HotJar social login attempt but requesting a ‘code token’ rather than simple ‘code’ response to prevent HotJar consuming the once-only code); and a social engineering method to persuade the victim to click the link and start the attack (with the code being delivered to the attacker). This is the basis of the attack: a false link (but it’s one that appears legitimate), persuading the victim to click the link, and receipt of an actionable log-in code. “Once the attacker has a victim’s code, they can start a new login flow in HotJar but replace their code with the victim code – leading to a full account takeover,” reports Salt Labs. The vulnerability is not in OAuth, but in the way in which OAuth is implemented by many websites. Fully secure implementation requires extra effort that most websites simply don’t realize and enact, or simply don’t have the in-house skills to do so. From its own investigations, Salt Labs believes that there are likely millions of vulnerable websites around the world. The scale is too great for the firm to investigate and notify everyone individually. 
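To illustrate the kind of extra implementation effort involved, the sketch below shows one generic OAuth hardening step: binding each login attempt to a per-session state value so an authorization code injected from another session is rejected. It is an assumption-laden illustration, not HotJar's code or Salt Labs' recommended fix.

import secrets

def start_login(session, authorize_url, client_id, redirect_uri):
    """Begin an OAuth flow, storing an unguessable state value in this
    browser session before redirecting to the provider."""
    session["oauth_state"] = secrets.token_urlsafe(32)
    return (f"{authorize_url}?response_type=code&client_id={client_id}"
            f"&redirect_uri={redirect_uri}&state={session['oauth_state']}")

def handle_callback(session, params):
    """Reject any callback whose state does not match the one this session
    issued, so a code minted for another session cannot be replayed here."""
    expected = session.pop("oauth_state", None)
    if expected is None or not secrets.compare_digest(expected, params.get("state", "")):
        raise PermissionError("state mismatch: possible injected authorization code")
    return params["code"]  # safe to exchange server-side for tokens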


How to Build a High-Performance Analytics Team

The first approach, which he called the “artisan model,” involves building a small team of highly experienced (and highly paid) data scientists. Such skilled and capable team members can generally tackle all aspects of solving a business problem, from subject matter expert engagement to hypothesis testing, production, and iteration. The “factory approach,” on the other hand, resembles more of an assembly line, with a large group of people divvying up tasks based on their areas of expertise: some working on the business problem definition, others handling data acquisition, and so on. This second approach requires hiring more people than the first approach, but the pay differential between the two types of team members is significant enough that the two approaches cost roughly the same. ... An analytics team needs to grow and evolve to survive, and management must treat its staff accordingly. “Data scientists are some of the most sought-after talent in the economy right now,” Thompson stressed, “So I’m working every day to make sure that my team is happy and that they’re getting work they’re interested in ­– that they’re being paid well and treated well.”


Securing remote access to mission-critical OT assets

The two biggest challenges around securing remote access to mission-critical OT assets are different depending on whether it’s a user or machine that needs to connect to the OT asset. In terms of user access, the fundamental challenge is that the cyber security team doesn’t know what the assets are, and who the users are. That’s where the knowledge of the OT engineers – coupled with an inventory of the assets – comes into play. The security team can leverage the inventory, experience, and knowledge of the OT engineers to operate as the “first line of defense” to stand up the organizational defenses. With respect to machine-to-machine access, organizations typically don’t have an understanding of what “known good” traffic should look like between these assets. Without this understanding, it’s impossible to spot anomalies from the baseline. That’s where a good cyber-physical system protection platform comes into play, providing the ability to understand the typical communication patterns that can eventually be operationalized in network segmentation rules to ensure effective security.
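A toy version of that baselining idea is sketched below in Python, assuming flow records from a network tap or flow collector; real cyber-physical system platforms do far more, but it shows how a learned set of known-good flows exposes anomalies that can feed segmentation rules.

# Learn the (source, destination, port) tuples seen during a known-good
# period, then flag anything that falls outside that baseline.
baseline_flows = set()

def learn(flows):
    """Record observed flows as the known-good baseline."""
    for f in flows:
        baseline_flows.add((f["src"], f["dst"], f["port"]))

def detect_anomalies(flows):
    """Return flows that were never seen during the baselining period."""
    return [f for f in flows
            if (f["src"], f["dst"], f["port"]) not in baseline_flows]

learn([{"src": "plc-01", "dst": "historian", "port": 44818}])
new = detect_anomalies([
    {"src": "plc-01", "dst": "historian", "port": 44818},   # known good
    {"src": "plc-01", "dst": "10.0.9.77", "port": 3389},    # never seen before
])
print(new)   # the unexpected flow becomes a candidate for alerting or a segmentation rule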


CrowdStrike debacle underscores importance of having a plan

To CrowdStrike’s credit, as well as its many partners and the CISO/InfoSec community at large, a lot of oil was burned in the initial days after the faulty update was transmitted as the community collectively jumped in and lent a hand to mitigate the situation. ... “Moving forward, this outage demonstrates that continuous preparation to fortify defenses is vital, especially before outages occur,” Christine Gadsby, CISO at Blackberry, opined. She continued, “Already understanding what areas are most vulnerable within a system prevents a panicked reaction when something looks amiss and makes it more difficult for hackers to wreak havoc. In a crisis, defense is the best offense; the value of confidence that comes with preparation cannot be underestimated.” ... CISOs should also review what needs to be changed, included, or deleted from their emergency response and business continuity playbooks. ... Now is the time for each CISO to do a bit of introspection on their team’s ability to address a similar scenario, and plan, exercise, and be prepared for the unexpected. Which could happen today, tomorrow, or hopefully never.


How Searchable Encryption Changes the Data Security Game

Organizations know they must encrypt their most valuable, sensitive data to prevent data theft and breaches. They also understand that organizational data exists to be used. To be searched, viewed, and modified to keep businesses running. Unfortunately, our Network and Data Security Engineers were taught for decades that you just can't search or edit data while in an encrypted state. ... So why, now, is Searchable Encryption suddenly becoming a gold standard in critical private, sensitive, and controlled data security? According to Gartner, "The need to protect data confidentiality and maintain data utility is a top concern for data analytics and privacy teams working with large amounts of data. The ability to encrypt data, and still process it securely is considered the holy grail of data protection." Previously, the possibility of data-in-use encryption revolved around the promise of Homomorphic Encryption (HE), which has notoriously slow performance, is really expensive, and requires an obscene amount of processing power. However, with the use of Searchable Symmetric Encryption technology, we can process "data in use" while it remains encrypted and maintain near real-time, millisecond query performance.
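For a sense of the mechanics, the toy Python sketch below uses symmetric encryption for storage and keyed keyword tokens for lookups, so records can be matched without being decrypted; production searchable-encryption schemes add protections (for example against access-pattern leakage) that this deliberately omits, and all names are illustrative.

import hmac
import hashlib
from collections import defaultdict
from cryptography.fernet import Fernet   # pip install cryptography

enc_key = Fernet.generate_key()
index_key = b"keyword-token-key"          # would be derived and stored securely
cipher = Fernet(enc_key)

encrypted_store = {}                       # record id -> ciphertext
keyword_index = defaultdict(set)           # keyword token -> record ids

def token(keyword):
    """Derive a keyed, deterministic token for a keyword (never the plaintext)."""
    return hmac.new(index_key, keyword.lower().encode(), hashlib.sha256).digest()

def insert(record_id, text):
    """Store the record encrypted and index its keyword tokens."""
    encrypted_store[record_id] = cipher.encrypt(text.encode())
    for word in set(text.lower().split()):
        keyword_index[token(word)].add(record_id)

def search(keyword):
    """Match records by token comparison, without decrypting anything."""
    return keyword_index.get(token(keyword), set())

insert("r1", "wire transfer flagged for review")
insert("r2", "routine balance inquiry")
print(search("transfer"))                               # {'r1'} — found while encrypted
print(cipher.decrypt(encrypted_store["r1"]).decode())   # decryption only when authorized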


How Cloud-Based Solutions Help Farmers Improve Every Season

At the start of each growing season, farmers can use previous years’ data to strategically plan where and when to plant seeds, identifying the areas of the field where plants often grow strongly or are typically not as prosperous. From there, planters equipped with robotics, sensors, and camera vision, augmented with field boundaries, guidance lines, and other data provided from the cloud, can precisely place hundreds of seeds per second at an optimal depth and with optimal spacing, avoiding losses from seeds being planted too shallow, deep, or close to another plant. ... Advanced machines gather a wide range of data to support the next step of nurturing plant growth. That data is critical, because while plants are growing, so are weeds. And weeds need to be treated in a timely manner to give crops the best possible conditions to grow. With access to the prior year’s data, farmers can anticipate where weeds are likely to grow and target them directly. Today’s sprayers use computer vision and machine learning to detect where weeds are located as the sprayer moves throughout a field, applying herbicide only where it is needed. This not only reduces costs but is also more sustainable.


Thinking Like an Architect

The world we're in is not simple. The applications we build today are complex because they are based on distributed systems, event-driven architectures, asynchronous processing, or scale-out and auto-scaling capabilities. While these are impressive capabilities, they add complexity. Models are an architect’s best tool to tackle complexity. Models are powerful because they shape how people think. Dave Farley illustrated this with an example: long ago, people believed the Earth was at the center of the universe and this belief made the planets' movements seem erratic and complicated. The real problem wasn't the planets' movements but using an incorrect model. When you place the sun at the center of the solar system, everything makes sense. Architects explaining things to others who operate differently may believe that others don't understand when they simply use a different mental model. ... Architects can make everyone else a bit smarter by seeing multiple dimensions. By expanding the problem and solution space, architects enable others to approach problems more intelligently. Often, disagreements arise when two parties view a problem from different angles, akin to debating between a square and a triangle without progress.


CrowdStrike Outage Could Cost Cyber Insurers $1.5 Billion

Most claims will center on losses due to "business interruption, which is a primary contributor to losses from cyber incidents," it said. "Because these losses were not caused by a cyberattack, claims will be made under 'systems failure' coverage, which is becoming standard coverage within cyber insurance policies." But, not all systems-failure coverage will apply to this incident, it said, since some policies exclude nonmalicious events or have to reach a certain threshold of losses before being triggered. The outage resembled a supply chain attack, since it took out multiple users of the same technology all at once - including airlines, doctors' practices, hospitals, banks, stock exchanges and more. Cyber insurance experts said the timing of the outage will also help mitigate the quantity of claims insurers are likely to see. At the moment CrowdStrike sent its update gone wrong, "more Asia-Pacific systems were online than European and U.S. systems, but Europe and the U.S. have a greater share of cyber insurance coverage than does the Asia-Pacific region," Moody's Reports said. The outage, dubbed "CrowdOut" by CyberCube, led to 8.5 million Windows hosts crashing to a Windows "blue screen of death" and then getting stuck in a constant loop of rebooting and crashing.


Open-source AI narrows gap with proprietary leaders, new benchmark reveals

As the AI arms race intensifies, with new models being released almost weekly, Galileo’s index offers a snapshot of an industry in flux. The company plans to update the benchmark quarterly, providing ongoing insight into the shifting balance between open-source and proprietary AI technologies. Looking ahead, Chatterji anticipates further developments in the field. “We’re starting to see large models that are like operating systems for this very powerful reasoning,” he said. “And it’s going to become more and more generalizable over the course of the next maybe one to two years, as well as see the context lengths that they can support, especially on the open source side, will start increasing a lot more. Cost is going to go down quite a lot, just the laws of physics are going to kick in.” He also predicts a rise in multimodal models and agent-based systems, which will require new evaluation frameworks and likely spur another round of innovation in the AI industry. As businesses grapple with the rapid pace of AI advancement, tools like Galileo’s Hallucination Index will likely play an increasingly crucial role in informing decision-making and strategy. 



Quote for the day:

"Uncertainty is a permanent part of the leadership landscape. It never goes away." -- Andy Stanley

Daily Tech Digest - July 29, 2024

Addressing the conundrum of imposter syndrome and LLMs

LLMs, trained on extensive datasets, excel at delivering precise and accurate information across a broad spectrum of topics. The advent of LLMs has undoubtedly been a significant advancement, offering a superior alternative to traditional web browsing and the often tedious process of sifting through multiple sites with incomplete information. This innovation significantly reduces the time required to resolve queries, find answers and move on to subsequent tasks. Furthermore, LLMs serve as excellent sources of inspiration for new, creative projects. Their ability to provide detailed, well-rounded responses makes them invaluable for a variety of tasks, from writing resumes and planning trips to summarizing books and creating digital content. This capability has notably decreased the time needed to iterate on ideas and produce polished outputs. However, this convenience is not without its potential risks. The remarkable capabilities of LLMs can lead to over-reliance, in which we depend on them for even the smallest tasks, such as debugging or writing code, without fully processing the information ourselves.


Enhancing threat detection for GenAI workloads with cloud attack emulation

Detecting threats in GenAI cloud workloads should be a significant concern for most organizations. Although this topic is not heavily discussed, it is a ticking time bomb that might explode only when attacks emerge or if compliance regulations enforce threat detection requirements for GenAI workloads. ... Automatic inventory systems are required to track organizations’ GenAI workloads. This is a critical requirement for threat detection, the basis for security visibility. However, this might be challenging in organizations where security teams are unaware of GenAI adoption. Similarly, only some technical tools can discover and maintain an inventory of GenAI cloud workloads. ... Most cloud threats are not actual vulnerabilities but abuses of existing features, making the detection of malicious behavior challenging. This is also a challenge for rule-based systems since they are not always able to identify intelligently when API calls or log events indicate malicious events. Therefore, event correlation is leveraged to formulate possible events indicating attacks. GenAI has several abuse cases, e.g., prompt injections and training data poisoning. 


Thriving in the AI Era: A 7-Step Playbook For CEOs

Integrating AI into the workplace requires a fundamental shift in how businesses approach employee education and skill development. Leaders must now prioritize lifelong learning and reskilling initiatives to ensure their workforce remains competitive in an AI-driven market. This involves not only technical training but also fostering a culture of continuous learning. By investing in upskilling programs, businesses can equip employees with the proper knowledge and capabilities to work alongside AI technologies. ... The potential risks associated with AI, such as biases, data breaches and misinformation, underscore the urgent need for ethical AI practices. Business leaders must establish robust governance frameworks to ensure that AI technologies are developed and deployed responsibly. This includes implementing standards for fairness, accountability, and transparency in AI systems. ... Maximizing human potential requires creating work environments that facilitate “flow states,” where individuals are fully immersed and engaged in their tasks. Psychologist Mihaly Csikszentmihalyi’s concept of flow theory highlights the importance of focused, distraction-free work periods for enhancing performance.


Benefits and Risks of Deploying LLMs as Part of Security Processes

Advanced LLMs hold tremendous promise to reduce the workload of cybersecurity teams and to improve their capabilities. AI-powered coding tools have widely penetrated software development. Github research found that 92% of developers are using or have used AI tools for code suggestion and completion. Most of these “copilot” tools have some security capabilities. Programmatic disciplines with relatively binary outcomes such as coding (code will either pass or fail unit tests) are well suited for LLMs. ... As a new technology with a short track record, LLMs have serious risks. Worse, understanding the full extent of those risks is challenging because LLM outputs are not 100% predictable or programmatic. ... As AI systems become more capable, their information security deployments are expanding rapidly. To be clear, many cybersecurity companies have long used pattern matching and machine learning for dynamic filtering. What is new in the generative AI era are interactive LLMs that provide a layer of intelligence atop existing workflows and pools of data, ideally improving the efficiency and enhancing capabilities of cybersecurity teams. 


NIST releases new tool to check AI models’ security

The guidelines outline voluntary practices developers can adopt while designing and building their model to protect it against being misused to cause deliberate harm to individuals, public safety, and national security. The draft offers seven key approaches for mitigating the risks that models will be misused, along with recommendations on how to implement them and how to be transparent about their implementation. “Together, these practices can help prevent models from enabling harm through activities like developing biological weapons, carrying out offensive cyber operations, and generating child sexual abuse material and nonconsensual intimate imagery,” the NIST said, adding that it was accepting comments on the draft till September 9. ... While the SSDF is broadly concerned with software coding practices, the companion resource expands the SSDF partly to address the issue of a model being compromised with malicious training data that adversely affects the AI system’s performance, it added. As part of the NIST’s plan to ensure AI safety, it has further proposed a separate plan for US stakeholders to work with others around the globe on developing AI standards.


Data Privacy Compliance Is an Opportunity, Not a Burden

Often, businesses face challenges in ensuring that the consent categories set by their consent management platforms (CMPs) are accurately reflected in their data collection processes. This misalignment can result in user event data inappropriately entering downstream tools. With advanced consent enforcement, customers can now effortlessly synchronize their consent categories with their data collection and routing strategies, eliminating the risk of sending user event data where it shouldn’t be. This establishes a robust connection between the CMP and the data collection engine, ensuring that they consistently align and preventing any unintended data leaks or misconfigurations. Moreover, leaders should consider minimizing the data they collect by ensuring it genuinely advances re-targeting efforts. ... Customers are more interested in protecting their data – and more pessimistic about data privacy – than ever. Organizations can capitalize on this sentiment by becoming robust data stewards. Embracing data privacy as an opportunity rather than a burden can lead to improved outcomes, stronger customer relationships, and a competitive advantage in the market. 
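To make the consent-enforcement idea concrete, here is a minimal, self-contained sketch of consent-aware event routing. The category names, destinations, and consent values are invented for illustration and are not tied to any particular CMP or data collection platform.

# Consent state as it might be exported from a CMP (illustrative values).
CONSENT = {"analytics": True, "advertising": False, "functional": True}

# Each downstream destination is mapped to the consent category it requires.
DESTINATION_CATEGORY = {
    "product_analytics": "analytics",
    "ad_retargeting": "advertising",
    "session_replay": "functional",
}

def route_event(event: dict) -> list:
    """Return only the destinations the user has consented to; everything else is dropped."""
    return [dest for dest, category in DESTINATION_CATEGORY.items()
            if CONSENT.get(category, False)]

print(route_event({"type": "page_view", "user": "u123"}))
# ad_retargeting never receives the event because advertising consent is absent.

The key design point is that this mapping lives alongside the data collection engine, so a change to the CMP's consent categories automatically changes where events are allowed to flow.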


The impact of AI on mitigating risks in hiring processes: Combating employee fraud

There are different ways in which AI is transforming the hiring process and eliminating fraud, but to begin with, we must understand the many forms candidate fraud can take. It may occur in multiple ways, such as outright lying on resumes, falsifying credentials, or even identity theft. These may consist of intentional misrepresentations or omissions, such as when an applicant does not disclose a criminal history. As a result, companies may suffer significant financial losses, sharp declines in productivity, or even legal problems. This is where artificial intelligence can help. ... AI is also capable of probing applicant behaviour throughout the recruiting process. Using facial recognition technology, machine learning algorithms can evaluate interview responses and communication styles, detecting subtle facial expressions that may indicate deceit or unease. Additionally, voice analysis can spot odd shifts in speech patterns and tonality, providing important details about a candidate’s authenticity.


Balancing Technology with Personal Touch: The Evolution of Digital Lending

The best way to get someone on your side is to invite them into the battle. We brought in some of our retail partners to provide feedback on how the application looks and feels from their perspective. We also involved loan officers who are part of the application intake experience. They were able to provide quick, immediate feedback on the spot and we were able to make changes based on their input. By involving employees in the process, they felt like their voice was heard and they had a seat at the table. ... This approach to employee engagement in digital transformation aligns with broader trends in change management and organizational psychology. Companies across industries are recognizing that successful digital transformations require not just technological upgrades, but also cultural shifts and employee buy-in. ... As financial institutions continue to navigate the digital transformation of lending processes, the key to success lies in balancing technological innovation with a deep understanding of customer needs and a commitment to employee engagement. By embracing change while maintaining a focus on personalized service, banks like Broadway Bank are well-positioned to thrive in the evolving landscape of digital lending.


The True Cost of a Major Network or Application Failure

When critical communication and collaboration tools falter, the consequences extend far beyond immediate revenue loss. Employees experience downtime, productivity declines, and customers may face disruptions in service, leading to dissatisfaction and potential churn. The negative publicity surrounding major outages can further damage a company's brand reputation, eroding stakeholder trust. ... Common issues like dropped calls, delays in joining meetings, and poor audio/video quality affecting only a handful of users may seem minor when viewed individually, but their collective toll can be significant. These issues strain IT resources, create a backlog of tickets, and decrease employee morale and job satisfaction. ... To address the challenges posed by network and application failures, it’s clear organizations must be more proactive in setting up monitoring and incident response strategies. After all, real-time insight into the health and performance of UCaaS and SaaS platforms enables IT teams to identify and address issues before they escalate. Further, implementing robust incident management protocols and conducting regular performance assessments are crucial to minimizing downtime and maximizing operational efficiency.
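As a sketch of what proactive monitoring can look like in its simplest form, the snippet below probes a couple of service health endpoints and flags slow or failing responses. The URLs and latency budget are placeholders, and production teams would rely on a proper observability platform rather than a script like this.

import requests  # assumes the requests library is installed

ENDPOINTS = {
    "meetings": "https://meetings.example.com/health",  # placeholder URLs
    "voice": "https://voice.example.com/health",
}
LATENCY_BUDGET_S = 0.5  # illustrative threshold

def check_endpoints():
    for name, url in ENDPOINTS.items():
        try:
            resp = requests.get(url, timeout=5)
            latency = resp.elapsed.total_seconds()
            if resp.status_code != 200 or latency > LATENCY_BUDGET_S:
                print(f"ALERT: {name} degraded (status={resp.status_code}, latency={latency:.2f}s)")
        except requests.RequestException as exc:
            print(f"ALERT: {name} unreachable: {exc}")

if __name__ == "__main__":
    check_endpoints()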


Gartner Predicts 30% of Generative AI Projects Will Be Abandoned After Proof of Concept By End of 2025

A major challenge for organizations arises in justifying the substantial investment in GenAI for productivity enhancement, which can be difficult to translate directly into financial benefit, according to Gartner. ... “Unfortunately, there is no one-size-fits-all with GenAI, and costs aren’t as predictable as other technologies,” said Sallam. “What you spend, the use cases you invest in and the deployment approaches you take all determine the costs. Whether you’re a market disruptor and want to infuse AI everywhere, or you have a more conservative focus on productivity gains or extending existing processes, each has different levels of cost, risk, variability and strategic impact.” ... By analyzing the business value and the total costs of GenAI business model innovation, organizations can establish the direct ROI and future value impact, according to Gartner. This serves as a crucial tool for making informed investment decisions about GenAI business model innovation. “If the business outcomes meet or exceed expectations, it presents an opportunity to expand investments by scaling GenAI innovation and usage across a broader user base, or implementing it in additional business divisions,” said Sallam.
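A back-of-the-envelope version of that ROI analysis is straightforward; the figures below are invented purely to show the arithmetic.

def genai_roi(annual_business_value: float, total_annual_cost: float) -> float:
    # Direct ROI = (value delivered - total cost) / total cost
    return (annual_business_value - total_annual_cost) / total_annual_cost

# e.g., $1.2M of estimated productivity value against $800K of model,
# integration, and change-management costs yields a 50% direct ROI.
print(f"{genai_roi(1_200_000, 800_000):.0%}")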



Quote for the day:

"The signs of outstanding leadership are found among the followers." -- Max DePree

Daily Tech Digest - July 28, 2024

India's tech revolution fuels the rise of the managed services industry

Companies, eager to leverage cloud computing, AI, and improved solutions, are seeking reliable partners to manage the complexity. With a massive talent pool of STEM graduates, dynamic IT infrastructure, and supportive policies, the world is looking to India to serve as a primary player in this space. India's arduous journey to becoming a global tech giant is the result of decades of investing time, money, and energy in meticulous planning and continuous progress. Companies from around the world are shifting significant parts of their IT and business spend to India to drive cost optimisation. ... The pandemic acted as a catalyst for a revolution in managed IT services. Agility, resilience, and enhanced cybersecurity became critical overnight, and the rise in remote work led to a surge in demand for managed services. Cybersecurity concerns also skyrocketed both during and after the pandemic, and innovation in data handling became imperative. Consequently, many companies are pivoting towards managed service providers to strengthen their computing power, data analysis, and cybersecurity measures.


5 Innovative Cybersecurity Measures App Developers Should Incorporate in the Digital Transformation Race

The impending era of quantum computing will give digital transformation the expected boost; however, this leap beyond classical computing poses a significant challenge to traditional encryption methods, because its exceptional computing power could let attackers launch unprecedented brute-force attacks that decrypt passwords and crack encryption in seconds or minutes. App developers must integrate post-quantum cryptographic features to withstand the computational power of quantum computers. ... By incorporating ZTNA and multifactor authentication (MFA), app developers can proactively prevent data breaches by thoroughly verifying the trustworthiness of any user or device trying to access the organization's networks. Multifactor authentication adds a layer of security to VPN access by requiring multiple verification forms; users’ verification methods can include passwords, unique OTP codes sent to mobile devices, or biometric authentication, such as fingerprint, eye scan, voice recognition, hand geometry, or facial recognition, before granting network access.
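As one small, hedged example of the MFA layer described above, the snippet below verifies a time-based one-time password (TOTP) using the pyotp library; the secret is generated on the fly here purely for demonstration, whereas a real system would provision and store it per user.

import pyotp  # assumes the pyotp package is installed

secret = pyotp.random_base32()   # provisioned once per user in a real deployment
totp = pyotp.TOTP(secret)

def second_factor_ok(submitted_code: str) -> bool:
    # Grant VPN/network access only after the first factor (password) AND this check.
    return totp.verify(submitted_code)

print(second_factor_ok(totp.now()))   # True within the current 30-second window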


How to Use Self-Healing Code to Reduce Technical Debt

The idea of self-healing code with LLMs is exciting, but balancing automation and human oversight is still crucial. Manual reviews are necessary to ensure AI solutions are accurate and meet project goals, with self-healing code drastically reducing manual effort. Good data housekeeping is vital, and so is ensuring that teams are familiar with best practices for managing the data that feeds AI technology, including LLMs and other algorithms. This is particularly important for cross-department data sharing, with best practices including conducting assessments, consolidations, and data governance and integration plans to improve projects. None of this can take place without enabling continuous learning across your staff. To encourage teams to use these opportunities, leaders should carve out dedicated time for training workshops that offer direct access to the latest tools. These training sessions could be oriented around certifications like those from Amazon Web Services (AWS), which can significantly incentivize employees to enhance their skills. By doing this, a more efficient and innovative software development environment can be achieved.
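A minimal sketch of such a loop, assuming tests run with pytest and with the LLM call stubbed out (propose_fix is an invented placeholder for whichever model API a team actually uses), might look like this. The candidate patch is written out for human review rather than applied automatically, reflecting the oversight point above.

import subprocess

def propose_fix(failure_log: str) -> str:
    """Stub for the LLM call; a real implementation would send the failing
    output to a model and return a suggested patch."""
    return f"# TODO: candidate patch derived from {len(failure_log)} chars of test output"

def self_heal() -> bool:
    result = subprocess.run(["pytest", "-q"], capture_output=True, text=True)
    if result.returncode == 0:
        print("All tests pass; nothing to heal.")
        return True
    patch = propose_fix(result.stdout + result.stderr)
    with open("proposed_patch.txt", "w") as f:   # queued for manual review
        f.write(patch)
    print("Candidate patch written for review.")
    return False

if __name__ == "__main__":
    self_heal()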


Chaos Management in Software: A Guide on How to Conduct it

When too much development occurs too soon, this is one symptom that chaos may be present in an organization. Growth is usually beneficial, but not when it causes chaos and confusion. Companies also exhibit signs of disorder when they overstretch their operational capacity or resources, such as money or people, creating an unstable atmosphere for both employees and consumers. ... In the work environment, we can see how chaos can affect our output both negatively and positively. It is essential to be in a healthy work environment that offers employees the opportunity to succeed and be rewarded for their achievements. The problem with chaos is that it can create an unhealthy work environment, negatively affecting workers’ productivity, quality of work, and physical health. Chaos in the workplace also impacts team building, because when people are in a chaotic space, they cannot focus on anything other than how they feel at that moment. We have all been there – that one time when we did not get enough sleep, or the project was due tomorrow morning, or we needed to wake up early to finish a presentation before the meeting started.


CIOs must reassess cloud concentration risk post-Crowdstrike

Cloud concentration risk now arises when these enterprises rely worryingly on a single cloud service provider (CSP) for all their critical business needs. In effect, reliance has shifted from their own data centers to storing all data and running all applications on a single cloud infrastructure. Cloud concentration risk is fully realized when any one incident, like the CrowdStrike outage, can disrupt your entire operation. With enterprises increasingly dependent on the same applications and cloud providers, this can be devastating at scale, as we’ve seen with CrowdStrike. Such a scenario extends to security breaches and other events that can have more systemic impact on countries and industries. ... To avoid the dangers of cloud concentration risk, a multi-cloud strategy, in which business workloads are spread across multiple cloud providers, is vital. With a multi-cloud strategy in place, when one provider has an issue, your operations in the other clouds can keep things running. The alternative is to adopt a hybrid cloud approach, combining private and public cloud. This gives you more control over proprietary and sensitive data whilst still having all the benefits of public cloud scalability.
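To illustrate the multi-cloud idea at the application level, the sketch below tries a primary provider's endpoint and falls back to a second one; the URLs are placeholders, and real architectures would typically handle this with DNS, load balancing, or replicated deployments rather than client-side retries.

import requests  # assumes the requests library is installed

PROVIDER_ENDPOINTS = [
    "https://api.cloud-a.example.com/orders",  # primary provider (placeholder)
    "https://api.cloud-b.example.com/orders",  # secondary provider (placeholder)
]

def fetch_orders():
    for url in PROVIDER_ENDPOINTS:
        try:
            resp = requests.get(url, timeout=3)
            if resp.status_code == 200:
                return resp.json()
        except requests.RequestException:
            continue  # provider unreachable; try the next cloud
    raise RuntimeError("All providers unavailable")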


With ‘Digital Twins,’ The Doctor Will See You Now

Doctors who use the system can not only measure the usual stuff, like pulse and blood pressure, but also spy on the blood’s behavior inside the vessel. This lets them observe swirls in the bloodstream called vortices and the stresses felt by vessel walls — both of which are linked to heart disease. ... We drew a lot from the way they were already optimizing graphics for these computers: The 3D mesh file that we create of the arteries is really similar to what they make for animated characters. The way you move a character’s arm and deform that mesh is the same way you would put in a virtual stent. And the predictions are not just a single number that you want to get back. There’s a quantity called “wall shear stress,” which is just a frictional force on the wall. We’ve shown that when doctors can visualize that wall shear stress at different parts of the artery, they may actually change the length of the stent that they choose. It really informs their decisions. We’ve also shown that, in borderline cases, vorticity is associated with long-term adverse effects. So doctors can see where there’s high vorticity. It could help doctors decide what type of intervention is needed, like a stent or a drug.
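For readers unfamiliar with the quantity, wall shear stress can be estimated from the velocity gradient at the vessel wall, tau_w = mu * du/dy. The numbers below are illustrative only, not taken from the article's simulations.

mu = 3.5e-3          # dynamic viscosity of blood, Pa*s (typical literature value)
u_near_wall = 0.02   # axial velocity sampled 0.1 mm from the wall, m/s (made-up sample)
dy = 1e-4            # distance of that sample from the wall, m

tau_w = mu * u_near_wall / dy   # finite-difference approximation of du/dy at the wall
print(f"Estimated wall shear stress: {tau_w:.2f} Pa")   # about 0.7 Pa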


What Are the Five Pillars of Data Resilience?

The first is the most basic one: Do you have data backed up in the right way? That seems very straightforward, but you’d be shocked by how many companies don’t have the right backup strategy in place. And that’s vital because our research tells us that 93% of ransomware attackers go for the backups first. ... So, the second pillar is, can you recover quickly from a breach? What’s your recovery strategy, and can you get to your recovery time objective and recovery point objective? Third is data freedom, which is not often talked about. There are many instances where you’ll just need to change your tech stack. You may see a better tech solution, or companies may just change their posture. No matter what choice you make, you need your data to travel with you with minimal fuss. Security is fourth. Do you have the right malware protection? Are you able to detect changing patterns, even of your own employees, to mitigate insider threats? And there’s obviously table stakes, like multifactor authentication, end-to-end security, etc. And then the last pillar we look at is data intelligence.


Navigating the Future with Cloud-Ready, Customer-Centric Innovations

One of CFOS’s most transformative aspects is its cloud-based infrastructure. SCC realised that the traditional on-premises servers were becoming a bottleneck, limiting scalability and flexibility. By moving to the cloud, SCC gained the ability to dynamically scale resources according to demand, reducing upfront costs and minimising maintenance challenges. This shift optimised resource utilisation and provided a more agile platform for future growth and technological advancements. “Transitioning from on-premises servers to a cloud solution significantly enhanced SCC’s operational strategy,” Lee revealed. “Previously, managing and scaling physical servers posed challenges, particularly in cost and availability of relevant skill-sets, solutions and resources.” The cloud integration resolved these challenges by enabling SCC to scale resources as needed. This approach enhanced cost efficiency and allowed the organisation to quickly adapt to changing demands. By transitioning to the cloud, SCC was able to manage resources dynamically, accommodating peak loads and supporting future growth without the limitations of physical infrastructure.


The Ultimate Roadmap to Modernizing Legacy Applications

First, organizations should conduct an assessment of their application portfolios to determine which apps are eligible for modernization, whether that be containerization, cloud migration, refactoring or another route. This can help government IT leaders prioritize which apps to upgrade. It also gives teams a comprehensive picture of the entire application portfolio: performance, health, average age, security gaps, container construction and more. “Having an inventory of all of your applications can help you avoid duplicative investments and paint a clearer picture of how that application fits into your organization’s long-term strategy,” says Greg Peters, founder of strategic application modernization assessment (SAMA) at CDW. ... The next critical step is to map dependencies before beginning the actual modernization. “Even a minor change to the functionality of a core system can have major downstream effects, and failing to account for any dependencies on legacy apps slated for modernization can lead to system outages and business interruptions,” Hitachi Solutions notes.
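As a small sketch of the dependency-mapping step, the snippet below builds a dependency graph for a handful of invented applications and derives a modernization order in which each app's dependencies are handled first.

from graphlib import TopologicalSorter  # Python 3.9+

# app -> set of things it depends on; names are invented for illustration.
dependencies = {
    "customer_portal": {"auth_service", "billing"},
    "billing": {"legacy_db"},
    "auth_service": {"legacy_db"},
    "legacy_db": set(),
}

order = list(TopologicalSorter(dependencies).static_order())
print("Suggested modernization order:", order)
# legacy_db comes first and customer_portal last, so no app is modernized
# before the systems it depends on have been addressed.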


Fully Homomorphic Encryption (FHE) with silicon photonics – the future of secure computing

FHE requires specialist hardware and considerable amounts of processing power, leading to high energy consumption and increased costs. However, FHE enabled by silicon photonics — using light to transmit data — offers a solution that could make FHE more scalable and efficient. Current electronic hardware systems are reaching their limits, struggling to handle the large volumes of data and meet the demands of FHE. However, silicon photonics can significantly enhance data processing speed and efficiency, reduce energy consumption, and enable large-scale implementation of FHE. This can unlock numerous possibilities for data privacy across various sectors, including healthcare, finance and government, in areas such as AI, data collaboration and blockchain. It could potentially lead to significant progress in medical research and fraud detection, and enable large-scale collaboration across industries and geographies. ... FHE is set to transform the future of secure computing and data security. By enabling computations on encrypted data, FHE offers new levels of protection for sensitive information, addressing critical challenges in privacy, cloud security, regulatory compliance, and data sharing.
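To make the core idea of computing on encrypted data concrete, here is a toy, additively homomorphic (Paillier-style) scheme in pure Python. This is deliberately not FHE (it supports only addition, and the tiny demo primes offer no real security), but it shows how a server can combine ciphertexts without ever seeing the underlying values.

import math
import random

def keygen(p=1789, q=1867):            # tiny demo primes; never use in practice
    n = p * q
    n_sq, g = n * n, n + 1
    lam = (p - 1) * (q - 1) // math.gcd(p - 1, q - 1)   # lcm(p-1, q-1)
    mu = pow((pow(g, lam, n_sq) - 1) // n, -1, n)
    return (n, g), (lam, mu, n)

def encrypt(pub, m):
    n, g = pub
    n_sq = n * n
    r = random.randrange(1, n)
    while math.gcd(r, n) != 1:
        r = random.randrange(1, n)
    return (pow(g, m, n_sq) * pow(r, n, n_sq)) % n_sq

def decrypt(priv, c):
    lam, mu, n = priv
    n_sq = n * n
    return ((pow(c, lam, n_sq) - 1) // n * mu) % n

pub, priv = keygen()
c1, c2 = encrypt(pub, 17), encrypt(pub, 25)
c_sum = (c1 * c2) % (pub[0] ** 2)      # addition performed on ciphertexts only
assert decrypt(priv, c_sum) == 42      # the party doing the math never saw 17 or 25

Full FHE schemes extend this property to arbitrary additions and multiplications, which is what makes them so computationally demanding and why hardware such as silicon photonics is being explored.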



Quote for the day:

“The road to success and the road to failure are almost exactly the same.” -- Colin R. Davis