Daily Tech Digest - March 06, 2024

From AML to cybersecurity: The evolving challenges of bank compliance

For banks, protecting their financial health and reputational standing is a strategic necessity. The ability to effectively identify, assess, and mitigate compliance and financial-crime threats is critical in safeguarding against operational disruptions and legal repercussions. In this high-stakes environment, the adoption of advanced solutions, particularly automation technology, is becoming increasingly important. These tools are not merely operational aids but strategic assets that streamline compliance processes and facilitate adherence to the constantly evolving regulatory landscape. ... KYC compliance focuses on verifying client identities and assessing their financial behavior, while AML efforts aim to prevent money laundering through transaction monitoring and analysis. These measures serve multiple roles in banking risk and compliance: reducing operational risk by preventing illegal activities, mitigating legal and regulatory risks to avoid fines and reputational damage, and safeguarding the financial system and society from financial crimes.


How Fintech Is Disrupting Traditional Banks in 2024

Broadly speaking, incumbent banks have adapted well to the past decade’s wave of fintech innovation, while startups have also managed to carve out meaningful market share. Both were able to drive and adapt to changing technology in the consumer banking space. Neobanks like Chime, SoFi and Varo found success providing “new front doors” for consumers — between them, the three companies’ apps were downloaded over 8 million times in 2023 alone. Meanwhile, incumbents were able to quickly adopt neobanks’ more attractive features like zero overdraft fees and continue to see substantial user base growth. Mobile app download data suggests incumbents and disruptors are both winning the race to be consumers’ primary financial relationship. On the business banking side, startup neobanks like Mercury and Brex benefited from early 2023 bank instability — receiving an estimated 29% of Silicon Valley Bank (SVB) deposit outflows. ... By facilitating “hands-off” investment and trading, the rise of roboadvisors opened the door to millions of consumers who were previously out of reach for wealth and asset management companies.


Suptech on the Rise As Consumer Protection & Prudential Banking Prioritised

A cultural shift is taking place alongside the digital transformation, with financial authorities creating new roles to drive suptech adoption, training staff, and collaborating across the supervisory ecosystem. Surveyed financial authorities report the biggest impact of their suptech implementation is the speed with which they are able to respond to emerging risks and take supervisory action (76 per cent). They also cite more efficient information flows between consumers and supervisors (65 per cent). This enables better and more transparent data analysis and timely response to potential issues. Suptech initiatives also positively impact consumer outcomes (52 per cent). Consequently, there has been improved protection and increased confidence in financial markets. ... “The diverse perspectives from the global supervisory community reflected in the State of SupTech Report serve as the guiding force in shaping our research, training programs, and digital tools. This year’s report dives particularly deeply into the strategies and structures that dictate data flows within financial authorities, which necessarily inform how suptech solutions can be tailored and harmonised with existing supervisory processes.


Cybersecurity in the Cloud: Integrating Continuous Security Testing Within DevSecOps

To integrate Continuous Security Testing (CST) successfully, you must first prepare your cloud environment, laying a solid foundation by conducting a comprehensive security audit to identify vulnerabilities and ensure your cloud architecture is fortified against threats. Manual methodologies such as the Open Web Application Security Project (OWASP) testing guidance, or sophisticated automated security testing processes, can significantly aid this endeavor. Conduct a detailed inventory of all assets and resources within your cloud architecture to assess your cloud environment's security posture. This includes everything from data storage solutions and archives to virtual machines and network configurations.
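As a minimal sketch of the audit step described above: record each cloud asset in an inventory and flag basic policy violations. The asset schema and the two checks here are illustrative assumptions, not any particular tool's format.

```python
# Minimal sketch of an asset-inventory audit step for CST preparation.
# The CloudAsset fields and policy checks are invented for illustration.

from dataclasses import dataclass

@dataclass
class CloudAsset:
    name: str
    kind: str            # e.g. "storage", "vm", "network"
    encrypted: bool
    publicly_exposed: bool

def audit(assets):
    """Return (asset name, finding) pairs for basic policy violations."""
    findings = []
    for a in assets:
        if not a.encrypted:
            findings.append((a.name, "data at rest is not encrypted"))
        if a.publicly_exposed and a.kind == "storage":
            findings.append((a.name, "storage bucket is publicly exposed"))
    return findings

inventory = [
    CloudAsset("archive-bucket", "storage", encrypted=True, publicly_exposed=True),
    CloudAsset("build-vm", "vm", encrypted=False, publicly_exposed=False),
]

for name, finding in audit(inventory):
    print(f"{name}: {finding}")
```

In practice the inventory would be populated from the cloud provider's APIs and the checks drawn from a hardening benchmark, but the shape of the exercise is the same: enumerate everything, then test each asset against explicit rules.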


How Leaders Can Instill Hope In Their Teams

“When something is meaningful, it helps us to answer the question ‘Why am I here?’ Amid the cost-of-living crisis and general world instability, it is important that employees are able to foster meaning in their work, as it is meaning that also brings hope to the day to day.” ... “The rising tide of conflict, complaints and concerns that we are seeing in our workplaces is contributing to high levels of anxiety and depression,” says David Liddle, CEO and chief consultant at mediation provider The TCM Group and author of Managing Conflict. “When people are spending their working days in toxic cultures, where incivility, bullying, harassment and discrimination are rife, it has a huge impact on both their physical and mental health.” ... Servantie argues that to tackle employee disengagement, leaders should “lead and inspire by example, showing that belief in change is possible, even in difficult times”. She says: “They should also remain steadfast in purpose and prioritize the growth of individuals over the growth of companies. Finally, communication and transparency in leadership are fundamental.


How to create an efficient governance control program

Your journey toward robust governance control begins with establishing a solid foundation: a house built on shaky ground will collapse over time. Key to that foundation is a framework of proven practices and a cultural shift that treats security as a business concern, not just a technology problem. From there, you can incrementally gauge your overall maturity and chart a path to continuous improvement, measuring where you stand today while planning for where you want to be. While navigating this step, it’s important to understand your regulatory environment and build the capabilities needed to keep your internal program compliant with your sector’s requirements. Bringing in stakeholder and business context will align practices to support both risk management and compliance, so that the controls you put in place are informed by the relevant control requirements and yield compliance as a natural by-product.


4 tabletop exercises every security team should run

Third-party risk management (TPRM) exercise participants should include representatives from key business partners in the supply chain — those who supply goods and services to the enterprise — as well as your cyber insurance provider, law enforcement, and all key stakeholders, often including the board of directors and senior management. While supply-chain attacks are ubiquitous, they are often misidentified because the attack might initially be classified as ransomware, an advanced persistent threat, or some other cyber threat. Often it takes the forensics team’s post-breach investigation to establish that the attack came through a trusted third party. ... Insider threats come in two primary types: malicious insiders who deliberately compromise corporate assets for personal, financial, political or some other gain, and those who create a security vulnerability accidentally or through lack of knowledge, without malice. In the former case, a deliberate crime against the company is committed. The latter case might involve a user error, or a user taking an action that seems reasonable for performing their job but that creates a vulnerability.


Digital Twins Are the Next Wave of Innovation, and Australia Needs to Move Quickly

In fact, in many ways, the journey of the digital twin seems to parallel the story of both digital transformation and AI before it — a lack of understanding of what digital twins are leads to excitement and investment, but without the right understanding, the risk of failure is higher. Gavin Cotterill, founder and managing director of Australian digital twin consultancy GC3 Digital, said in an interview with IoT Hub: “A lot of people think digital twin is just focused on a flashy 3D model, but effectively it is a master data management strategy.” “You need good quality data to support that decision making and the quality of our data, generally, is pretty poor. We have a lot of data, but we don’t know what to do with it,” Cotterill said. “Data governance, data strategy is the unsexy part of digital twin — it’s the engine room, it’s the fuel.” This means IT leaders face competing challenges with regard to digital twins. On the one hand, the appetite is there, particularly among executives and boards keen to be aware of the bleeding edge of technology. On the other hand, Australian organisations, as a whole, are not ready to tackle the digital twin opportunity.


Longer coherence: How the quantum computing industry is maturing

On-premises quantum computers are currently rarities largely reserved for national computing labs and academic institutions. Most quantum processing unit (QPU) providers offer access to their systems via their own web portals and through public cloud providers. But today’s systems are rarely expected (or contracted) to run with the five-9s resiliency and redundancy we might expect from tried and tested silicon hardware. “Right now, quantum systems are more like supercomputers and they're managed with a queue; they're probably not online 24 hours, users enter jobs into a queue and get answers back as the queue executes,” says Atom’s Hays. “We are approaching how we get closer to 24/7 and how we build in redundancy and failover so that if one system has come offline for maintenance, there's another one available at all times. How do we build a system architecturally and engineering-wise, where we can do hot swaps or upgrades or changes with as little downtime as possible?” Other providers are going through similar teething phases of how to make their systems – which are currently sensitive, temperamental, and complicated – enterprise-ready for the data centers of the world.
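The queue-plus-failover model Hays describes can be sketched in a few lines: jobs wait in a queue, and each is dispatched to the first backend that is currently online. The backend names and the simple online/offline flag are assumptions for illustration, not any vendor's API.

```python
# Illustrative sketch of queue-managed QPU access with failover:
# jobs are held in a queue and routed to whichever backend is online.

from collections import deque

backends = {"qpu-east": True, "qpu-west": False}  # name -> online?

def submit(queue, job):
    queue.append(job)

def dispatch(queue):
    """Run queued jobs on any online backend; hold them if none is up."""
    results = []
    while queue:
        job = queue.popleft()
        target = next((name for name, up in backends.items() if up), None)
        if target is None:
            queue.appendleft(job)   # no capacity: leave the job queued
            break
        results.append((job, target))
    return results

q = deque()
submit(q, "vqe-run-1")
submit(q, "grover-demo")
print(dispatch(q))
```

Taking "qpu-east" offline for maintenance simply means jobs queue up (or route to another online system) until capacity returns, which is the failover behavior providers are now engineering toward.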


Why Blockchain Payments Are Misunderstood

Comparing a highly regulated system to one that sits in a gray area can be misleading. Many crypto-based remittance applications perform few or no know-your-customer (KYC) and anti-money-laundering (AML) checks, which are costly and difficult to run. This is a cost advantage that is unlikely to last. Low levels of competition are another big driver of high payment costs. This is true both for business-to-business and consumer-to-consumer payments. ... On the business side, blockchains can drive costs down and build sustainable advantage through differentiated technology. While it is true that mainnet transaction costs in Ethereum are higher, the addition of smart contract functionality changes the equation entirely. Enterprises usually issue payments to each other as part of a complex agreement. This usually means not only verifying receipt of goods or services, but also compliance with the agreed-upon terms. ... Right now, the kind of fully digital end-to-end systems that smart contracts enable are the province of the world’s biggest companies. With scale and deep pockets, big companies have built integrated systems without blockchains.



Quote for the day:

"If you don't understand that you work for your mislabeled 'subordinates,' then you know nothing of leadership. You know only tyranny." -- Dee Hock

Daily Tech Digest - March 05, 2024

Experts Warn of Risks in Memory-Safe Programming Overhauls

Memory-safety vulnerabilities can allow hackers, cybercriminals and foreign adversaries to gain unauthorized access to federal systems, they said. But the experts also warned that the challenge of migrating legacy code and information technology written in non-memory-safe languages could prove unrealistic and too risky for most organizations to undertake. "Strategically focusing on eradicating memory-corruption vulnerabilities is crucial, due to their prevalence," said Chris Wysopal, co-founder and chief technology officer of Veracode. "However, completely rewriting existing software in memory-safe languages is impractical, expensive and could introduce new vulnerabilities." The report says experts have identified programming languages such as C and C++ in critical systems "that both lack traits associated with memory safety and also have high proliferation." While most enterprise software and mobile apps are already written in memory-safe languages, developers still prioritize performance over security under some scenarios, according to Jeff Williams, co-founder and chief technology officer of the security firm Contrast Security.


Hackers exploited Windows 0-day for 6 months after Microsoft knew of it

The vulnerability Lazarus exploited, tracked as CVE-2024-21338, offered considerably more stealth than BYOVD because it exploited appid.sys, a driver enabling the Windows AppLocker service, which comes pre-installed in the Microsoft OS. Avast said such vulnerabilities represent the “holy grail,” as compared to BYOVD. In August, Avast researchers sent Microsoft a description of the zero-day, along with proof-of-concept code that demonstrated what it did when exploited. Microsoft didn’t patch the vulnerability until last month. Even then, the disclosure of the active exploitation of CVE-2024-21338 and details of the Lazarus rootkit came not from Microsoft in February but from Avast 15 days later. A day later, Microsoft updated its patch bulletin to note the exploitation. It’s unclear what caused the delay or the initial lack of disclosure. Microsoft didn’t immediately have answers to questions sent by email. ... Once in place, the rootkit allowed Lazarus to bypass key Windows defenses such as Endpoint Detection and Response, Protected Process Light—which is designed to prevent endpoint protection processes from being tampered with—and protections against memory reads and code injection by unprotected processes.


How GenAI helps entry-level SOC analysts improve their skills

“There’s a specific set of analysts who can open it at any point in the user experience, with the context of the selected customer and all the data on their alerts and with access to our proprietary data sets,” he says. “Then the analysts can interact with it and ask questions about the investigation, such as what the next action should be.” As part of the staged rollout process for the GenAI features, Secureworks has built feedback loops that allow analysts to rate the results that the AI provides. Then the results go back to the data scientists and prompt engineers, who revise the prompts and the contextual information provided to the AI. Integrating generative AI revolutionized the way Secureworks’ junior analysts approach security operations, says Radu Leonte, the company’s VP of security operations. Instead of focusing exclusively on repetitive triage tasks, they can now handle comprehensive triage, investigation, and response. They can now triage alerts faster because all the supplementary data is brought into the platform, together with summaries and explanations, Leonte says. The accuracy and quality of triage increases as well because of fewer human comprehension errors and fewer missed detections.
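The feedback loop described above can be sketched simply: analysts rate each AI-generated answer, and prompts whose average rating falls below a threshold are queued for the prompt engineers to revise. The prompt IDs, the 1-5 scale, and the threshold are assumptions for illustration, not Secureworks' actual implementation.

```python
# Sketch of an analyst-rating feedback loop for GenAI prompt revision.
# Prompt names, scale, and threshold are invented for illustration.

from collections import defaultdict

ratings = defaultdict(list)   # prompt_id -> list of analyst scores (1-5)

def record_rating(prompt_id, score):
    ratings[prompt_id].append(score)

def prompts_needing_revision(min_avg=3.5):
    """Prompt IDs whose average analyst rating falls below the threshold."""
    return sorted(pid for pid, scores in ratings.items()
                  if sum(scores) / len(scores) < min_avg)

record_rating("triage-summary", 5)
record_rating("triage-summary", 4)
record_rating("next-action", 2)
record_rating("next-action", 3)

print(prompts_needing_revision())
```

The point of the staged rollout is exactly this: low-rated outputs flow back to data scientists and prompt engineers as structured signals rather than anecdotes.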


Singapore reviews ways to boost digital infrastructures after big outage

The impending Digital Infrastructure Act is among the measures being developed, with the intent to complement existing regulations that focus on mitigating cyber-related risks. The ministry added that the Cybersecurity Act soon will be expanded to include "foundational digital infrastructures", such as cloud service providers and data centers as well as key entities that hold sensitive data and carry out essential public functions. The new digital infrastructure bill also will go beyond cybersecurity to encompass other resilience risks, spanning misconfigurations in technical architectures and physical hazards, such as fires, water leaks, and cooling system failures. The task force will identify digital infrastructures and services that, if disrupted, have a "systemic impact" on Singapore's economy and society. These include cloud services that facilitate the availability of widely used digital services, such as digital identities, ride-hailing, and payments. The task force also is establishing requirements that regulated entities will be subject to under the Digital Infrastructure Act, which will consider the country's operating landscape and international developments.


Why we need both cloud engineers and cloud architects

Cloud engineers collaborate extensively with software developers and maybe do some ad hoc development. I would, however, not go so far as calling them developers since they do have other duties that are just as important and don’t require coding. What’s critical to being a cloud engineer is being “hands-on” in dealing with the complexities of cloud systems, databases, AI, governance, and security. In many cases, there are special engineering disciplines around these subtechnologies, and certainly certifications that address specifics, such as certified cloud database engineer. On the other hand, a cloud architect plays a strategic role in orchestrating the cloud computing strategy of an organization. They are responsible for designing the overarching cloud environment and ensuring its alignment with business objectives. They are not typically hands-on. They may have specializations as well, such as cloud database architect or cloud security architect. Cloud architects assess business and application requirements to craft scalable cloud solutions using the right mix of technologies. This can entail both cloud and non-cloud platforms. 


Why cyber maturity assessment should become standard practice

There are other clear benefits to the business in determining cyber maturity. By identifying gaps in security controls (and thus potential risks to the organization), it helps with reporting to the board on cyber security posture; and for the C-suite, which amid a recession and skills crisis needs to be laser-focused about where it invests, being able to pinpoint where and how to dedicate spend is invaluable. Moreover, because measuring maturity is a proactive, risk-based process that seeks to bring about continuous improvement, it can also reduce the likelihood and cost of an impact: Kroll’s State of Cyber Defense 2023 report found that those with a high level of cyber maturity experience fewer security incidents. And because it is focused on process, cyber maturity can help to embed a security culture within the business. ... But there are also marked differences depending on the size of the business: SMEs will sometimes have less governance, such as effective data protection or risk management processes, whereas larger enterprises, while they have the manpower and may even have a dedicated internal audit team, may be stretched or in some cases inexperienced.
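The gap-identification step at the heart of a maturity assessment can be sketched as a comparison of measured maturity levels against targets. The control names and target levels below are invented for illustration; real frameworks (for example, NIST CSF implementation tiers or CMMI-style levels) define their own scales.

```python
# Minimal sketch of surfacing control gaps from a maturity assessment.
# Control names and levels are illustrative, not from any framework.

targets = {"asset management": 3, "incident response": 3, "data protection": 4}
current = {"asset management": 3, "incident response": 1, "data protection": 2}

def gaps(current, targets):
    """Controls where measured maturity falls below the target level."""
    return {c: targets[c] - current.get(c, 0)
            for c in targets if current.get(c, 0) < targets[c]}

for control, shortfall in sorted(gaps(current, targets).items()):
    print(f"{control}: {shortfall} level(s) below target")
```

A table like this, re-measured each quarter, is exactly the kind of artifact that supports board reporting and focused spend decisions.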


OpenAI’s Defense in Copyright Lawsuit: New York Times “Hacked ChatGPT” To Create Evidence

The “NYT hacked ChatGPT” defense directly addresses claims of damages due to the chatbot being used as a potential substitute for a subscription to the paper, much in the same way that many less sophisticated tools allow for bypassing its paywall. But the defense does not address the broader question of whether OpenAI and others have an inherent right to use a copyrighted work to train an AI model, something that will rely on court interpretations of fair use law. The US fair use doctrine has never had entirely clear terms to cover every circumstance, and is largely built on precedent established by prior court decisions as examples of alleged unauthorized use come up. That is why the outcome of this copyright lawsuit potentially carries a lot of weight. This will be the first direct test of AI use of training materials in this way. How the courts interpret this use will be absolutely vital to the futures of OpenAI and similar companies; OpenAI has already publicly stated that it is impossible to train these types of LLMs without scraping publicly accessible materials from the internet. 


Generative AI Enthusiasm Versus Expertise: A Boardroom Disconnect

Educating business leaders and stakeholders -- including those who self-identify as experts -- will be key for companies in the coming months and years. Analytics and AI experts will need to find better ways to inform key decision-makers about generative AI. That means going beyond the surface to convey an understanding of the underlying technologies, too. Companies that are serious about adopting generative AI across their entire organization must ensure they have the mechanisms to manage risk and adopt the technology responsibly. It isn’t enough for companies to create and implement a governance plan -- they must then expend the energy to enforce the guidelines they have implemented. Otherwise, companies can fall into the trap of making these and other IT policies pointless, opening the door to even greater vulnerabilities and exposure. ... In the meantime, leaders can capitalize on this board enthusiasm to help spread awareness of generative AI's importance and influence funding sources within the company. One key message to convey will be the importance of democratizing the technology’s place within the organization so as many people as possible can unlock its value.


Why your best IT managers quit

“The boss is the classic reason why managers leave,” says Greg Barrett, a senior executive advisor and senior consultant, noting that he has seen this factor, more than money, prompt top talent to resign. Such bosses tend to micromanage and keep tight control of their direct reports, rather than allowing managers the autonomy they want and need to be good leaders themselves, Kozlo says. Bev Kaye, founder and CEO of employee development, engagement, and retention consultancy BevKaye&Co, has heard from plenty of promising professionals who quit their jobs because of a bad boss. “They’d say, ‘My boss was a jerk and I couldn’t stand it anymore.’” Bosses who are arrogant, condescending, and disrespectful are displaying “jerk behaviors,” Kaye says. Moreover, top performers complain when their bosses don’t cultivate the personal connections that demonstrate a genuine interest in helping their managers succeed and advance, she says. “We ask people why they leave, and they answer, ‘My boss never really knew me, never really knew the things I loved doing and working on,’” explains Kaye, who points to complaints she once heard from workers traveling to an event, a trip they had been given as a reward for their great performance but didn’t want to take.


Defending Operational Technology Environments: Basics Matter

"The idea that you're going to have an air gap or completely segmented or separated OT network is lunacy in this world, outside of nuclear pipelines," Lee said. "But you still don't want it to be where you can open up an email and hit a controller on your network." One test of whether an organization has an adequate focus on the basics is to see how it would fare against an already-seen threat, such as the Stuxnet malware designed to infect OT environments, which first appeared in 2010. "There are still a significant portion of infrastructure asset owners and operators that could not detect that capability today, 13 years later," Lee said. Beyond network segmentation, he said, essential security controls include monitoring ICS networks - less than 5% of which are currently being monitored - as well as requiring multifactor authentication and taking a risk-based approach to managing OT vulnerabilities. All of this remains age-old advice for protecting against current and future cybersecurity risks. "If you do the knowns, if you actually defend against the things that we know how to defend against, you get a lot of value out of the things you may not know about," he said.



Quote for the day:

"Accomplishing goals is not success. How much you expand in the process is." -- Brianna Wiest

Daily Tech Digest - March 04, 2024

Evolving Landscape of ISO Standards for GenAI

The burgeoning field of Generative AI (GenAI) presents immense potential for innovation and societal benefit. However, navigating this landscape responsibly requires addressing potential concerns regarding its development and application. Recognizing this need, the International Organization for Standardization (ISO) has embarked on the crucial task of establishing a comprehensive set of standards. ... A shared understanding of fundamental terminology is vital in any field. ISO/IEC 22989 serves as the cornerstone by establishing a common language within the AI community. This foundational standard precisely defines key terms like “artificial intelligence,” “machine learning,” and “deep learning,” ensuring clear communication and fostering collaboration and knowledge sharing among stakeholders. ... Similar to the need for blueprints in construction, ISO/IEC 23053 provides a robust framework for AI development. This standard outlines a generic structure for AI systems based on machine learning (ML) technology. This framework serves as a guide for developers, enabling them to adopt a systematic approach to designing and implementing GenAI solutions. 


Your Face For Sale: Anyone Can Legally Gather & Market Your Facial Data

We need a range of regulations on the collection and modification of facial information. We also need a stricter status of facial information itself. Thankfully, some developments in this area are looking promising. Experts at the University of Technology Sydney have proposed a comprehensive legal framework for regulating the use of facial recognition technology under Australian law. It contains proposals for regulating the first stage of non-consensual activity: the collection of personal information. That may help in the development of new laws. Regarding photo modification using AI, we’ll have to wait for announcements from the newly established government AI expert group working to develop “safe and responsible AI practices”. There are no specific discussions about a higher level of protection for our facial information in general. However, the government’s recent response to the Attorney-General’s Privacy Act review has some promising provisions. The government has agreed further consideration should be given to enhanced risk assessment requirements in the context of facial recognition technology and other uses of biometric information. 


Affective Computing: Scientists Connect Human Emotions With AI

Affective computing is a multidisciplinary field integrating computer science, engineering, psychology, neuroscience, and other related disciplines. A new and comprehensive review on affective computing was recently published in the journal Intelligent Computing. It outlines recent advancements, challenges, and future trends. Affective computing enables machines to perceive, recognize, understand, and respond to human emotions. It has various applications across different sectors, such as education, healthcare, business services and the integration of science and art. Emotional intelligence plays a significant role in human-machine interactions, and affective computing has the potential to significantly enhance these interactions. ... Affective computing, a field that combines technology with the nuanced understanding of human emotions, is experiencing surges in innovation and related ethical considerations. Innovations identified in the review include emotion-generation techniques that enhance the naturalness of human-computer interactions by increasing the realism of the facial expressions and body movements of avatars and robots. 


The open source problem

Over the years, I’ve trended toward permissive, Apache-style licensing, asserting that it’s better for community development. But is that true? It’s hard to argue against the broad community that develops Linux, for example, which is governed by the GPL. Because freedom is baked into the software, it’s harder (though not impossible) to fracture that community by forking the project. To me, this feels critical, and it’s one reason I’m revisiting the importance of software freedom (GPL, copyleft), and not merely developer/user freedom (Apache). If nothing else, as tedious as the internecine bickering was in the early debates between free software and open source (GPL versus Apache), that tension was good for software, generally. It gave project maintainers a choice in a way they really don’t have today because copyleft options disappeared when cloud came along and never recovered. Even corporations, those “evil overlords” as some believe, tended to use free and open source licenses in the pre-cloud world because they were useful. Today companies invent new licenses because the Free Software Foundation and OSI have been living in the past while software charged into the future. Individual and corporate developers lost choice along the way.


Researchers create AI worms that can spread from one system to another

Now, in a demonstration of the risks of connected, autonomous AI ecosystems, a group of researchers has created one of what they claim are the first generative AI worms—which can spread from one system to another, potentially stealing data or deploying malware in the process. “It basically means that now you have the ability to conduct or to perform a new kind of cyberattack that hasn't been seen before,” says Ben Nassi, a Cornell Tech researcher behind the research. ... To create the generative AI worm, the researchers turned to a so-called “adversarial self-replicating prompt.” This is a prompt that triggers the generative AI model to output, in its response, another prompt, the researchers say. In short, the AI system is told to produce a set of further instructions in its replies. This is broadly similar to traditional SQL injection and buffer overflow attacks, the researchers say. To show how the worm can work, the researchers created an email system that could send and receive messages using generative AI, plugging into ChatGPT, Gemini, and an open source LLM, LLaVA. They then found two ways to exploit the system—by using a text-based self-replicating prompt and by embedding a self-replicating prompt within an image file.
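Because the worm depends on the model emitting the attacker's instructions back out in its response, one simple defensive heuristic (sketched here as an illustration, not taken from the paper) is to flag outputs that reproduce a long span of the incoming prompt verbatim, since verbatim self-replication is the worm's propagation signature.

```python
# Heuristic check for self-replicating prompts: flag a model output that
# reproduces a long verbatim run of words from the incoming prompt.
# The word-run approach and the threshold are illustrative assumptions.

def longest_shared_run(prompt: str, output: str) -> int:
    """Length (in words) of the longest run appearing in both strings."""
    p_words, o_words = prompt.split(), output.split()
    best = 0
    for i in range(len(p_words)):
        for j in range(len(o_words)):
            k = 0
            while (i + k < len(p_words) and j + k < len(o_words)
                   and p_words[i + k] == o_words[j + k]):
                k += 1
            best = max(best, k)
    return best

def looks_self_replicating(prompt: str, output: str, threshold: int = 8) -> bool:
    """True if the output echoes a suspiciously long span of the prompt."""
    return longest_shared_run(prompt, output) >= threshold

benign = looks_self_replicating("summarize this email", "here is a summary")
print(benign)  # False: no long shared span
```

A real filter would need to handle paraphrased replication and instructions hidden in images (the paper's second vector), so this kind of verbatim check is at best one layer of defense.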


Do You Overthink? How to Avoid Analysis Paralysis in Decision Making

Welcome to the world of analysis paralysis. This phenomenon occurs when an influx of information and options leads to overthinking, creating a deadlock in decision-making. Decision makers, driven by the fear of making the wrong choice or seeking the perfect solution, may find themselves caught in a loop of analysis, reevaluation, and hesitation, consequently losing sight of the overall goal. ... Analysis paralysis impacts decision making by stifling risk taking, preventing open dialogue, and constraining innovation—all of which are essential elements for successful technology development. It often leads to mental exhaustion, reduced concentration, and increased stress from endlessly evaluating information, also known as decision fatigue. The implications of analysis paralysis include missed opportunities due to ongoing hesitation and innovative potential being restricted by cautious decision making. ... In the technology sector, the consequences of poor decisions can be far-reaching, potentially unraveling extensive work and achievements. Fear of this happening is heightened due to the sector’s competitive nature. Teams worry that a single misstep could have a cascading negative impact.


30 years of the CISO role – how things have changed since Steve Katz

Katz had no idea what the CISO job was when he accepted it in 1995. Neither did Citicorp. “They said you’ve got a blank cheque, build something great — whatever the heck it is,” Katz recounted during the 2021 podcast. “The CEO said, ‘The board has no idea, just go do something.’” Citicorp gave Katz just two directives after hiring him: “Build the best cybersecurity department in the world” and “go out and spend time with our top international banking customers to limit the damage.” ... today’s CISO must be able to communicate cyber threats in terms that lines of business can understand almost instantly. “It’s the ability to articulate risk in a way that is related to the business processes in the organization,” says Fitzgerald. “You need to be able to translate what risk means. Does it mean I can’t run business operations? Does it mean we won’t be able to treat patients in our hospital because we had a ransomware attack?” Deaner says CISOs have an obvious role to play in core infosec initiatives such as implementing a business continuity plan or disaster recovery testing. ... “People in CISO circles absolutely talk a lot about liability. We’re all concerned about it,” Deaner acknowledges. “People are taking the changes to those regulations very seriously because they’re there for a reason.”


Vishing, Smishing Thrive in Gap in Enterprise, CSP Security Views

There is a significant gap between enterprises’ high expectations that their communications service provider will provide the security needed to protect them against voice and messaging scams and the level of security those CSPs offer, according to telecom and cybersecurity software maker Enea. Bad actors and state-sponsored threat groups, armed with the latest generative AI tools, are rushing to exploit that gap, a trend that is apparent in the skyrocketing number of smishing (text-based phishing) and vishing (voice-based fraud) attacks hitting enterprises and the jump in all phishing categories since the November 2022 release of the ChatGPT chatbot by OpenAI, according to a report this week by Enea. ... “Maintaining and enhancing mobile network security is a never-ending challenge for CSPs,” the report’s authors wrote. “Mobile networks are constantly evolving – and continually being threatened by a range of threat actors who may have different objectives, but all of whom can exploit vulnerabilities and execute breaches that impact millions of subscribers and enterprises and can be highly costly to remediate.”


Causal AI: AI Confesses Why It Did What It Did

Traditional AI models are fixed in time and understand nothing. Causal AI is a different animal entirely. “Causal AI is dynamic, whereas comparable tools are static. Causal AI represents how an event impacts the world later. Such a model can be queried to find out how things might work,” says Brent Field at Infosys Consulting. “On the other hand, traditional machine learning models build a static representation of what correlates with what. They tend not to work well when the world changes, something statisticians call nonergodicity,” he says. It’s important to grok why nonergodicity makes such a crucial difference to almost everything we do. “Nonergodicity is everywhere. It’s one reason why money managers generally underperform the S&P 500 index funds. It’s why election polls are often off by many percentage points. ... Without knowing the cause of an event or potential outcome, the knowledge we extract from AI is largely backward facing even when it is forward predicting. Outputs based on historical data and events alone are by nature handicapped and sometimes useless. Causal AI seeks to remedy that.
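The nonergodicity point can be made concrete in a few lines: fit a static correlational model on one regime of synthetic data, then apply it after the data-generating process shifts. The data and the regime change are invented purely for illustration:

```python
# Why static (correlational) models fail under nonergodicity: a predictor
# fit on one regime is applied after the world changes. Stdlib only.
import random

random.seed(0)

# Regime A: y ~ 2x + noise. Fit a naive least-squares slope on it.
xs = [random.uniform(0, 10) for _ in range(200)]
ys_a = [2 * x + random.gauss(0, 0.5) for x in xs]
slope = sum(x * y for x, y in zip(xs, ys_a)) / sum(x * x for x in xs)

def mae(slope, xs, ys):
    """Mean absolute error of the fitted line on a dataset."""
    return sum(abs(slope * x - y) for x, y in zip(xs, ys)) / len(xs)

# Regime B: the world changed -- the true relationship is now y ~ -1x.
ys_b = [-1 * x + random.gauss(0, 0.5) for x in xs]

err_a = mae(slope, xs, ys_a)   # small: the model fits its training regime
err_b = mae(slope, xs, ys_b)   # large: same model, shifted world
assert err_b > 10 * err_a
```

The correlations the model learned were real; they simply stopped describing the world, which is the failure mode causal models are meant to survive.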


Leveraging power quality intelligence to drive data center sustainability

The challenge is that some data centers lack the power monitoring capabilities necessary for achieving heightened efficiency and sustainability. Moreover, continuous power quality monitoring is often lacking. Many rely on rudimentary measurements, such as voltage, current, and power parameters, gathered by intelligent rack power distribution units (PDUs), which are then transmitted to DCIM, BMS, and other infrastructure management and monitoring systems. Some consider power quality only during initial setup or occasionally revisit it when reconfiguring IT setups. This underscores the critical role of intelligent PDUs in delivering robust power quality monitoring and the imperative for data center and facility managers to steer efforts toward increased efficiency and sustainability. Certain power quality issues can have detrimental effects on the electrical reliability of a data center, leading to costly unplanned downtime and posing challenges in enhancing sustainability. ... These power quality issues can profoundly affect a data center's functionality and dependability. They may result in unforeseen downtime, harm to equipment, data loss or corruption, and reduced network efficiency.
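As one illustration of what continuous monitoring over PDU telemetry can catch, a DCIM-style check might flag voltage sags, i.e. samples falling below a tolerance band around nominal voltage. The nominal value, threshold, and readings here are invented for the sketch:

```python
# Illustrative check a monitoring system might run on intelligent-PDU
# voltage samples: flag sags below a tolerance band around nominal.

NOMINAL_V = 230.0
SAG_THRESHOLD = 0.90 * NOMINAL_V    # below 90% of nominal counts as a sag

def detect_sags(samples: list[float]) -> list[int]:
    """Return indices of samples that qualify as voltage sags."""
    return [i for i, v in enumerate(samples) if v < SAG_THRESHOLD]

readings = [229.8, 230.4, 198.5, 231.0, 205.9, 230.1]
sags = detect_sags(readings)
assert sags == [2, 4]               # two dips below the 207 V floor
```

A real deployment would run this continuously over streamed samples and correlate sags with rack-level events, rather than inspecting a static list.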



Quote for the day:

"If you want to achieve excellence, you can get there today. As of this second, quit doing less-than-excellent work." -- Thomas J. Watson

Daily Tech Digest - March 03, 2024

The most popular neural network styles and how they work

Feedforward networks are perhaps the most archetypal neural net. They offer a much higher degree of flexibility than perceptrons but still are fairly simple. The biggest difference in a feedforward network is that it uses more sophisticated activation functions and usually incorporates more than one layer. The activation function in a feedforward is not just 0/1, or on/off: the nodes output a dynamic variable. ... Recurrent neural networks, or RNNs, are a style of neural network that involve data moving backward among layers. This style of neural network is also known as a cyclic graph. The backward movement opens up a variety of more sophisticated learning techniques, and also makes RNNs more complex than some other neural nets. We can say that RNNs incorporate some form of feedback. ... Convolutional neural networks, or CNNs, are designed for processing grids of data. In particular, that means images. They are used as a component in the learning and loss phase of generative AI models like Stable Diffusion, and for many image classification tasks. CNNs use matrix filters that act like a window moving across the two-dimensional source data, extracting the information in their view and relating it together.
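The feedforward idea is small enough to show in full: a hidden layer of units with nonlinear (sigmoid) activations lets the network compute XOR, something a single perceptron cannot. The weights below are hand-picked for clarity, not learned:

```python
# A minimal two-layer feedforward pass, stdlib only. The hidden layer's
# nonlinear activation is what lets the network compute functions a single
# perceptron cannot -- XOR, in this example.
import math

def sigmoid(z: float) -> float:
    """Smooth 0..1 activation; the 'dynamic variable' a node outputs."""
    return 1.0 / (1.0 + math.exp(-z))

def forward(x1: int, x2: int) -> float:
    # Hidden layer: two units, each a weighted sum passed through sigmoid.
    h1 = sigmoid(20 * x1 + 20 * x2 - 10)    # behaves roughly like OR
    h2 = sigmoid(-20 * x1 - 20 * x2 + 30)   # behaves roughly like NAND
    # Output layer combines them: OR AND NAND == XOR.
    return sigmoid(20 * h1 + 20 * h2 - 30)

for a, b in [(0, 0), (0, 1), (1, 0), (1, 1)]:
    assert round(forward(a, b)) == (a ^ b)
```

Training would adjust those weights by gradient descent; the forward pass itself is all a feedforward network is.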


The startup CIO’s guide to formalizing IT for liquidity events

“You have to stop fixing problems in the data layer, relying on data scientists to cobble together the numbers you need. And if continuing that approach is advocated by the executives you work with, if it’s considered ‘good enough,’ quit,” he says. “Getting the numbers right at the source requires that you straighten out not only the systems that hold the data, all those pipelines of information, but also the processes whereby that data is captured and managed. No tool will ever entirely erase the friction of getting people to enter their data in a CRM.” The second piece to getting the numbers right comes at the end: closing the books. While this process is a near ubiquitous struggle for all growing companies, Hoyt offers two points of optimism. “First,” he explains, “many teams struggle to close the books simply because the company hasn’t invested in the proper tools. They’ve kicked the can down the street. And second, you have a clear metric of improvement: the number of days taken to close.” Hoyt suggests investing in the proper tools and then trying to shave the days-to-close each quarter. Get your numbers right, secure your company, bring it into compliance, and iron out your ops and infrastructure. 


Majority of commercial codebases contain high-risk open-source code

Advocates of open-source software have long argued that many eyes on code lead to fewer bugs and vulnerabilities, and the report doesn’t disprove that assertion, McGuire said. “If anything, the report supports that belief,” he said. “The fact that there are so many disclosed vulnerabilities and CVEs serves as a testament to how active, vigilant, and reactive the open-source community is, especially when it comes to addressing security issues. It’s this very community that is doing the discovery, disclosure, and patching work.” However, users of open-source software aren’t doing a good job of managing it or implementing the fixes and workarounds provided by the open-source community, he said. The primary purpose of the report is to raise awareness about these issues and to help users of open-source software better mitigate the risks, he said. “We would never recommend any software producer avoid using, or tamp down their usage, of open source,” he added. “In fact, we would argue the opposite, as the benefits of open source far outweigh the risks.” Open-source software has accelerated digital transformation and allowed companies to develop innovative applications that consumers want, he said. 


From gatekeeper to guardian: Why CISOs must embrace their inner business superhero

You, the CISO, are no longer just the security guard at the front gate. You're the city planner, the risk management consultant, the chief resilience officer, and the chief of police all rolled into one. You need to understand the flow of traffic, the critical infrastructure, and the potential vulnerabilities lurking in every alleyway. But how do we, the guardians of the digital realm, transform into these business superheroes? Fear not, fellow CISOs, for the path to upskilling and growth is paved with strategic learning, effective communication, and more than a dash of inspirational or motivational leadership. ... As the lone wolf days have ended, so too have the days when technical expertise alone could guarantee a CISO’s success. Today's CISO needs to be a voracious learner, constantly expanding their knowledge and skills. ... Failure to effectively communicate is a career killer for any CXO. To be influential, especially with the C-suite, CISOs must learn to speak in ways understood by their C-suite peers. Imagine how your eyes may glaze over when a CFO starts talking capex, opex, or EBITDA. Realize the same will happen for these cybersecurity “outsiders.”


Looking good, feeling safe – data center security by design

For data centers in shared spaces, sometimes turning data halls into display features is a way to make them secure. Keeping compute in a secure but openly visible space means it’s harder to do anything unnoticed. It may also help some engineers be more mindful about keeping the halls tidy and cabling neat. “Some people keep data centers behind closed walls and keep them hidden and private. Others use them as features,” says Nick Ewing, managing director at UK modular data center provider EfficiencyIT. “The best ones are the ones where the customers like to make a feature of the environment and use it as a bit of a display.” An example he cites is the Wellcome Sanger Institute in Cambridge, where they have four data center quadrants. Each quadrant is about 100 racks; they have man traps at either end of the data center corridor. But one end of the main quadrant is full of glass. “They have an LED display, which is talking about how many cores of compute, how much storage they’ve got, how many genomic sequences they've sequenced that day,” he says. “They've used it as a feature and used it to their advantage.”


Neuromorphic computing: The future of IoT

The adoption of neuromorphic computing in IoT promises many benefits, ranging from enhanced processing power and energy efficiency to increased reliability and adaptability. Here are some key advantages: More Powerful AI: Neuromorphic chips enable IoT devices to handle complex tasks with unprecedented speed and efficiency. By collocating memory and processing and leveraging parallel processing capabilities, these chips overcome the limitations of traditional architectures, resulting in near-real-time decision-making and enhanced cognitive abilities. Lower Power Consumption: One of the most significant advantages of neuromorphic computing is its energy efficiency. By adopting an event-driven approach and utilizing components like memristors, neuromorphic systems minimize energy consumption while maximizing performance, making them ideal for power-constrained IoT environments. Extensive Edge Networks: With the proliferation of edge computing, there is a growing need for IoT devices that can process data locally in real-time. Neuromorphic computing addresses this need by providing the processing power and adaptability required to run advanced applications at the edge, reducing reliance on centralized servers and improving overall system responsiveness.
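The event-driven principle behind these chips can be sketched with a leaky integrate-and-fire (LIF) neuron: it only "spends energy" (emits a spike) when its membrane potential crosses a threshold, and stays silent otherwise. This is a toy discrete-time simulation with invented parameters, not any vendor's hardware model:

```python
# Toy leaky integrate-and-fire neuron illustrating event-driven compute:
# work happens only when the membrane potential crosses threshold.

def lif_run(inputs: list[float], leak: float = 0.9,
            threshold: float = 1.0) -> list[int]:
    """Return the spike train produced by an input current sequence."""
    v, spikes = 0.0, []
    for current in inputs:
        v = v * leak + current        # leaky integration of input
        if v >= threshold:            # event: fire and reset
            spikes.append(1)
            v = 0.0
        else:
            spikes.append(0)          # no event, (almost) no work
    return spikes

quiet = lif_run([0.05] * 10)          # weak input: neuron never fires
burst = lif_run([0.6] * 10)           # strong input: periodic spikes
assert sum(quiet) == 0 and sum(burst) > 0
```

The energy argument follows directly: under sparse input, most timesteps produce no event, so an event-driven chip does proportionally little work, unlike a clocked architecture that computes every cycle regardless.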


Decentralizing the AR Cloud: Blockchain's Role in Safeguarding User Privacy

For devices to interpret the world, their camera needs access to have some kind of digital counterpart that it can cross reference. And that digital counterpart of the world is much too complex to fit inside one device. Therefore, the AR cloud has been developed. The AR cloud is a network of computers that work to help devices understand the physical world. ... The AR cloud is akin to an API to the world. The implications for applications that require knowledge about location, context, and more are considerable. In AR, the data is intimate data about where we are, who we are with, what we’re saying, looking at, and even what our living quarters look like. AR devices can read our facial expressions, and more, similar to how the Apple Watch can measure the heart rates of its wearers. Digital service providers will have access to a bevy of information and also insight into our thinking, wants, needs, and desires. Storing that data in a centralized server that is opaque is cause for concern. Blockchain allows people to take that same intimate private data, and put it on their own server from which they could access the wondrous world of AR minus such egregious privacy concerns. 


Five ways AI is helping to reduce supply chain attacks on DevOps teams

Attackers are using AI to penetrate an endpoint to steal as many forms of privileged access credentials as they can find, then use those credentials to attack other endpoints and move throughout a network. Closing the gaps between identities and endpoints is a great use case for AI. A parallel development is also gaining momentum across the leading extended detection and response (XDR) providers. CrowdStrike co-founder and CEO George Kurtz told the keynote audience at the company’s annual Fal.Con event last year, “One of the areas that we’ve really pioneered is that we can take weak signals from across different endpoints. And we can link these together to find novel detections. We’re now extending that to our third-party partners so that we can look at other weak signals across not only endpoints but across domains and come up with a novel detection.” Leading XDR platform providers include Broadcom, Cisco, CrowdStrike, Fortinet, Microsoft, Palo Alto Networks, SentinelOne, Sophos, TEHTRIS, Trend Micro and VMware. Enhancing LLMs with telemetry and human-annotated data defines the future of endpoint security.
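The "weak signals" idea Kurtz describes can be sketched as simple score aggregation: individually low-confidence events from different endpoints are linked by a shared credential, and only the combined score trips an alert. The event shape, scores, and threshold below are invented for illustration and do not reflect any real XDR product's data model:

```python
# Hedged sketch of weak-signal correlation: no single event is alarming,
# but linking events by shared credential raises a high-confidence alert.
from collections import defaultdict

ALERT_THRESHOLD = 1.0

events = [
    {"endpoint": "laptop-7", "credential": "svc-deploy", "score": 0.4},
    {"endpoint": "build-01", "credential": "svc-deploy", "score": 0.5},
    {"endpoint": "db-02",    "credential": "svc-deploy", "score": 0.3},
    {"endpoint": "laptop-9", "credential": "alice",      "score": 0.4},
]

def correlate(events: list[dict]) -> list[str]:
    """Return credentials whose combined signal crosses the threshold."""
    totals = defaultdict(float)
    for e in events:                      # link weak signals by credential
        totals[e["credential"]] += e["score"]
    return [cred for cred, s in totals.items() if s >= ALERT_THRESHOLD]

assert correlate(events) == ["svc-deploy"]   # 1.2 combined; alice stays quiet
```

Real systems correlate on many more join keys (hashes, domains, sessions) and learn the weights, but the core move is the same: aggregation across endpoints turns noise into a detection.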


Blockchain transparency is a bug

Transparency isn’t a feature of decentralization that is truly needed to perform on-chain transactions securely — it’s a bug that forces Web3 users to expose their most sensitive financial data to anyone who wants to see it. Several blockchain marketing tools have emerged over the past few years, allowing marketers and salespeople to use the freely flowing on-chain data for user insights and targeted advertising. But this time, it’s not just behavioral data that is analyzed. Now, your most sensitive financial information is also added to the mix. Web3 will never become mainstream unless we manage to solve this transparency problem. Blockchain and Web3 were an escape from centralized power, making information transparent so that centralized entities cannot own one’s data. Then 2020 came, Web3 and NFTs boomed, and many started talking about how free flowing, available-to-all data is a clear improvement from your data being “stolen” by big data companies as a customer. Some may think if everyone can see the data, transparency will empower users to take ownership of and profit from their own data. Yet, transparency does not mean data can’t be appropriated nor that users are really in control.


Key Considerations to Effectively Secure Your CI/CD Pipeline

Effective security in a CI/CD pipeline begins with the definition of clear and project-specific security policies. These policies should be tailored to the unique requirements and risks associated with each project. Whether it's compliance standards, data protection regulations, or industry-specific security measures (e.g., PCI DSS, HDS, FedRamp), organizations need to define and enforce policies that align with their security objectives. Once security policies are defined, automation plays a crucial role in their enforcement. Automated tools can scan code, infrastructure configurations, and deployment artifacts to ensure compliance with established security policies. This automation not only accelerates the security validation process but also reduces the likelihood of human error, ensuring consistent and reliable enforcement. In the DevSecOps paradigm, the integration of security gates within the CI/CD pipeline is pivotal to ensuring that security measures are an inherent part of the software development lifecycle. If you set up security scans or controls that users can bypass, those methods become totally useless — you want them to become mandatory.
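A mandatory security gate of the kind described can be as simple as a script that fails the pipeline when scanner findings exceed the project's policy. The policy shape and finding fields below are illustrative assumptions, not any particular scanner's real output format:

```python
# Hedged sketch of a CI security gate: pass only when findings stay
# within project-specific policy thresholds.

POLICY = {"max_critical": 0, "max_high": 2}   # project-specific limits

def gate(findings: list[dict]) -> bool:
    """Return True (pass) only if findings stay within policy limits."""
    critical = sum(1 for f in findings if f["severity"] == "critical")
    high = sum(1 for f in findings if f["severity"] == "high")
    return critical <= POLICY["max_critical"] and high <= POLICY["max_high"]

clean_scan = [{"id": "CVE-0001", "severity": "high"}]
bad_scan = clean_scan + [{"id": "CVE-0002", "severity": "critical"}]

assert gate(clean_scan) is True
assert gate(bad_scan) is False
```

In a pipeline you would wrap this in `sys.exit(0 if gate(findings) else 1)` and run it in a stage users cannot skip, which is what makes the gate mandatory rather than advisory.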



Quote for the day:

"It is better to fail in originality than to succeed in imitation." -- Herman Melville

Daily Tech Digest - March 02, 2024

Rust on the Rise: New Advocacy Expected to Advance Adoption

Recent advocacy and research efforts from agencies like the National Security Agency (NSA), Cybersecurity and Infrastructure Security Agency (CISA), National Institute of Standards and Technology (NIST), and ONCD “can serve as valuable evidence of the considerable risk memory-safety vulnerabilities pose to our digital ecosystem,” the Rust Foundation’s Executive Director & CEO, Rebecca Rumbul, told The New Stack. Moreover, Rumbul said The Rust Foundation believes that the Rust programming language is the most powerful tool available to address critical infrastructure security gaps. “As an organization, we are steadfast in our commitment to further strengthening the security of Rust through programs like our Security Initiative,” she said. Meanwhile, looking specifically at software development for space systems, the ONCD report says that both memory-safe and memory-unsafe programming languages meet the organization’s requirements for developing space systems. “At this time, the most widely used languages that meet all three properties are C and C++, which are not memory-safe programming languages,” the report said.


The Power of Hyperautomation in Banking

Hyperautomation significantly improves operational efficiency within banks by automating routine processes that include document processing, transaction reconciliation, and data entry, decreasing the need for manual intervention. This not only streamlines processes but also reduces errors, leading to a more reliable and cost-effective operation. Banks can use hyperautomation to offer personalized, 24/7 services to their customers. Chatbots and virtual assistants powered by artificial intelligence can respond to inquiries and perform transactions around the clock. Faster response times, coupled with the ability to tailor services to individual customer requirements, lead to enhanced customer satisfaction and loyalty. “Hyperautomation facilitates organizations to improve customer experience by reducing the friction in user self-service applications and streamlining broken onboarding processes. It enables faster support and sales query resolution through relevant integrations, AI/ML, and assistive technologies,” says Arvind Jha, Former General Manager – Product Management and Marketing, Newgen Software.


What Is Data Completeness and Why Is It Important?

Data completeness is an important aspect of Data Quality. Data Quality refers to how accurate and reliable the data is overall. Data completeness specifically focuses on missing data or how complete the data is, rather than concerns about inaccurate or duplicated data. A lack of data completeness is normally the result of information that was never collected. For example, if a customer’s name and email address are supposed to be collected, but the email address is missing, it is difficult to communicate with the customer. ... Missing chunks of information restrict or bias the decision-making process. Attempting to perform analytics with incomplete data can produce blind spots and biases, and result in missed opportunities. Currently, business leaders use data analytics to make decisions that range from marketing to investment strategies to medical diagnostics. In some situations, data missing key pieces of information is still used, which can lead to dangerous mistakes and false conclusions. Assessing and improving data completeness should be done before performing analytics.
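A completeness assessment is straightforward to automate: for each required field, measure the fraction of records where it is present and non-empty. The field names and records below are illustrative:

```python
# Small completeness check over customer records: report, per required
# field, the fraction of records where it is present and non-empty.

REQUIRED = ["name", "email"]

def completeness(records: list[dict]) -> dict[str, float]:
    """Map each required field to its share of usable (non-empty) values."""
    total = len(records)
    return {
        field: sum(1 for r in records if r.get(field)) / total
        for field in REQUIRED
    }

customers = [
    {"name": "Ada", "email": "ada@example.com"},
    {"name": "Grace", "email": ""},        # collected, but empty
    {"name": "Alan"},                      # never collected
]

scores = completeness(customers)
assert scores["name"] == 1.0
assert round(scores["email"], 2) == 0.33   # only 1 of 3 records usable
```

Running a check like this before analytics makes the "missing chunks" visible up front, so a team can decide whether to re-collect, impute, or exclude rather than silently biasing results.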


A socio-technical approach to data management is crucial in our decentralised world

To improve the odds of successfully building an effective data management strategy, working with a trusted and experienced data partner to help shift the organisation’s data culture is a crucial - and often missing - step. The Data and Analytics Leadership Annual Executive Survey 2023 found that cultural factors are the biggest obstacle to delivering value from data investments. Data fabrics, meshes and modern data stacks will continue to consolidate an increasingly decentralised world by making the management of data easier. However, to ensure control over security and governance, and to extract value from data that is trustworthy requires a tactical shift to what we call a socio-technical approach. In other words, any strategy must be made up of an investment in people, process and technology to be successful. This is because data management involves more than the technical aspects of data storage, processing and analysis. It also includes the social aspects of data governance, change management, data quality management, user upskilling and collaboration between different teams. Organisations that know how to use technology the best will have an edge over their competitors.


Blockchain is one step away from mainstream adoption

Blockchain’s growth is already reshaping traditional business processes and models. In the financial sector, blockchain facilitates faster and more secure transactions. Supply chain management benefits from increased transparency and traceability, ensuring the authenticity and integrity of products. Smart contracts automate and streamline complex agreements, minimizing the risk of fraud and error. And in addition to sparking rising trading volumes, the SEC’s approval of spot bitcoin ETFs sent a global signal of validation to governments reviewing the viability of blockchain applications in both the private and public sectors. Importantly, the evolution of blockchain has given credence to — and bestowed practicality upon — the concept of decentralized finance (DeFi). We’re already in a reality where traditional financial services are replicated, and even improved, using blockchain technology. This is transformative because it will eliminate the need for intermediaries, opening the door to financial participation for virtually anyone with internet access. This democratization of finance has the potential to provide financial services to underserved populations and redefine the global financial landscape.


Biometrics Regulation Heats Up, Portending Compliance Headaches

What this all means is that it will be complicated for companies doing business nationally because they will have to audit their data protection procedures and understand how they obtain consumer consent or allow consumers to restrict the use of such data and make sure they match the different subtleties in the regulations. Contributing to the compliance headaches: The executive order sets high goals for various federal agencies in how to regulate biometric information, but there could be confusion in terms of how these regulations are interpreted by businesses. For example, does a hospital's use of biometrics fall under rules from the Food and Drug Administration, Health and Human Services, the Cybersecurity and Infrastructure Security Agency, or the Justice Department? Probably all four. ... Meanwhile, AI-induced deepfake video impersonations by criminals that abuse biometric data like face scans are on the rise. Earlier this year, a deepfake attack in Hong Kong was used to steal more than $25 million, and there are certainly others who will follow as AI technology gets better and easier to use for producing biometric fakes. The conflicting regulations and criminal abuses could explain why consumer confidence in biometrics has taken a nosedive.


The Role of Data in Crafting Personalized Customer Journeys

To build comprehensive customer profiles, data is sourced from multiple siloed touchpoints such as website visits, completed purchases, forms, customer support interactions, social media engagement, mobile app usage, and other channels recognized in the CRM system. This facilitates real-time data processing and identifies customer behaviors and preferences. As briefly discussed previously, predictive analytics consumes historical customer data and powers forecasting of expected behaviors and preferences. It segments data based on parameters such as demographics, behaviors, and preferences, and ultimately acts as the seed for responsive marketing campaigns. Another important strategy is cross-channel integration. Given the scale of the marketing landscape, it is important to consider all channels and systems. The data collected from multiple sources is integrated and analyzed through data management platforms to create a unified, cross-channel 360-degree view. Such interoperability delivers an omnichannel experience, thereby increasing customer lifetime value. To ensure better customer loyalty, implement practices that align with applicable regulations.


Checkout Lessons: What Banks Need to Borrow from eCommerce

eCommerce has much to teach the financial and healthcare industries, which also experience high seasonality and peak traffic periods. Events like 401(k) sign-ups, healthcare enrollments, and tax days are notorious for bringing down systems. In my experience, performance is synonymous with user experience. ... Many digital-first banks don’t operate physical branches. Their success is due to a singular focus on user experience, performance, speed, flexibility, and a mobile-first approach. This is what has won over the current generation of young people who do not need to visit a teller. It’s crucial for banks to recognize the importance of these advancements and to take action. Otherwise, they risk losing their competitive edge. In the U.S., some banks perform exceptionally well with only an online presence, with USAA as a prime example. Some companies, like Capital One, are innovating by transforming their banks into cafés. They provide WiFi, allowing customers to work and do more than just banking. This shift dramatically enhances the user experience.


Fintech at its Finest: Adding Value with Innovation

The best fintech platforms are constantly listening to their customers. Whether that’s through harnessing the power of AI to create an optimal user experience or continuously innovating based on customer feedback, a good fintech is creating exactly what its customers want and need. ... The best fintech platforms have innovative technologies at their core and are increasingly harnessing AI and machine learning to enhance their services. But crucially, they are also designed to be intuitive for users. After all, businesses have just 10 minutes to set up digital accounts or risk losing consumer trust. Millennials and Gen Z make up a significant part of fintech’s core market, so it’s the providers that can cater to tech-savvy generations and prioritise smooth customer experiences who will differentiate themselves in an increasingly crowded market. ... In the bustling world of fintech, the top platforms set themselves apart by cleverly blending practices to ensure they keep growing and succeed – even when faced with challenges. These platforms develop excellent solutions, using technologies like blockchain, AI and advanced data analytics to tackle old financial problems and improve user experiences.


Enabling Developers To Become (More) Creative

What influence does collaboration have on creativity? Now we are starting to firmly tread into management territory! Since software engineering happens in teams, the question becomes how to build a great team that's greater than the sum of its parts. There are more than just a few factors that influence the making of so-called "dream teams". We could use the term "collective creativity" since, without a collective, the creativity of each genius would not reach as far. The creative power of the individual is more negligible than we dare to admit. We should not aim to recruit the lone creative genius, but instead try to build collectives of heterogeneous groups with different opinions that manage to push creativity to its limits. ... Managers can start taking simple actions towards that grand goal. For instance, by helping facilitate decision-making, as once communication goes awry in teams, the creative flow is severely impeded. Researcher Damian Tamburri calls this problem "social debt." Just like technical debt, when there's a lot of social debt, don't expect anything creative to happen. Managers should act as community shepherds to help reduce that debt.



Quote for the day:

"A real entrepreneur is somebody who has no safety net underneath them." -- Henry Kravis

Daily Tech Digest - March 01, 2024

Why Large Language Models Won’t Replace Human Coders

Are any of these GenAI tools likely to become substitutes for real programmers? Unless the accuracy of coding answers supplied by models increases to within an acceptable margin of error (i.e., 98-100%), then probably not. Let’s assume for argument’s sake, though, that GenAI does reach this margin of error. Does that mean the role of software engineering will shift so that you simply review and verify AI-generated code instead of writing it? Such a hypothesis could prove faulty if the four-eyes principle is anything to go by. It’s one of the most important mechanisms of internal risk control, mandating that any activity of material risk (like shipping software) be reviewed and double-checked by a second, independent, and competent individual. Unless AI is reclassified as an independent and competent lifeform, then it shouldn’t qualify as one pair of eyes in that equation anytime soon. If there’s a future where GenAI becomes capable of end-to-end development and building Human-Machine Interfaces, it’s not in the near future. LLMs can do an adequate job of interacting with text and elements of an image. There are even tools that can convert web designs into frontend code.


The future of farming

SmaXtec’s solution requires cows to swallow what the company calls a “bolus” - a small device that consists of sensors to measure a cow’s pH and temperature, an accelerometer, and a small processor. “It sits inside the cow and constantly measures very important body health parameters, including temperature, the amount of water intake, the drinking volume, the activity of the animal, and the contraction of the rumen in the dairy cow,” Scherer said. Rumination is a process of regurgitation and re-digestion. “You could almost envision this as a Fitbit for cows,” he said, adding that by constantly measuring those parameters at a high density - short timeframes with high robustness and high accuracy - SmaXtec can make assessments about potential diseases that are about to break out. ... Small Robot Company is known for its Tom robot. Tom - the robot - distantly recalls Doctor Who’s robotic dog K9. The device wheels itself up and down fields, capturing images and mapping out the land. The data is then taken from Tom’s SSD and uploaded to the cloud, where an AI identifies the different plants and weeds, and provides a customized fertilizer and herbicide plan for the crops.


The CISO: 2024’s Most Important C-Suite Officer

Short- and long-term solutions to navigating increased regulatory and plaintiff bar scrutiny start with the CISO. Cybersecurity defense strategies, implementation and monitoring fall under the purview of the CISO, who must closely coordinate with other members of the C-suite as well as boards of directors. Recent lawsuits highlight individual fiduciary liability for cybersecurity controls and accurate disclosures. Individual liability demands increased knowledge of, participation in and shared ownership of cybersecurity defense decisions. Gone are the days when liability risks could be eliminated by placing the blame on a single security officer. Boards and other C-suite executives now have personal risks over company cybersecurity defenses and preparedness. CISOs carry primary ownership for formulating and maintaining robust cybersecurity defenses and preparedness. This starts with implementing secure by design and other leading security frameworks. It extends to effective real-time threat monitoring and continual technology assessment of company capabilities to defend against advanced cyber threats or the “Defining Threat of Our Time.”


Generative AI and the big buzz about small language models

LLMs can create a wide array of content, from text and images to audio and video, with multimodal systems emerging to handle more than one of these tasks. They process massive amounts of information to execute natural language processing (NLP) tasks that approximate human speech in response to prompts. As such, they are ideal for drawing on vast amounts of data to generate a wide range of content, as well as for conversational AI tasks. This requires significant server and storage capacity, plus the all-too-scarce GPUs that power the models — at a cost some organizations are unwilling or unable to bear. It’s also tough to satisfy ESG requirements when LLMs hog compute resources for training, augmenting, fine-tuning, and the other tasks organizations require to hone their models. In contrast, SLMs consume fewer computing resources than their larger brethren and provide surprisingly good performance — in some cases on par with LLMs on certain benchmarks. They’re also more customizable, allowing organizations to target specific tasks. For instance, SLMs may be trained on curated data sets and paired with retrieval-augmented generation (RAG) to help refine search. For many organizations, SLMs may be ideal for running models on premises.
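The RAG pairing mentioned above can be sketched minimally: retrieve the most relevant documents from a curated set, then prepend them to the prompt before it reaches the model. The toy corpus, the keyword-overlap retriever (a stand-in for the embedding search a real system would use), and the prompt template are all illustrative assumptions, not any particular product’s API:

```python
def retrieve(query, corpus, k=1):
    """Rank documents by naive keyword overlap with the query."""
    q_words = set(query.lower().split())
    scored = sorted(corpus,
                    key=lambda d: len(q_words & set(d.lower().split())),
                    reverse=True)
    return scored[:k]

def build_prompt(query, corpus):
    """Augment the prompt with retrieved context before it reaches the SLM."""
    context = "\n".join(retrieve(query, corpus))
    return f"Context:\n{context}\n\nQuestion: {query}\nAnswer using only the context."

# Hypothetical curated data set an on-premises SLM might be paired with
corpus = [
    "the refund policy allows returns within 30 days of purchase",
    "support hours are 9am to 5pm on weekdays",
]
print(build_prompt("what is the refund policy", corpus))
```

Because the retrieval step narrows the question to a small, trusted context, a smaller model can answer accurately without having memorized the whole domain during training.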


Captive centers are back. Is DIY offshoring right for you?

Captive centers are no longer just a means of value creation, providing cost savings and driving process standardization. They are driving organization-wide innovation, facilitating digital transformations, and contributing to revenue growth. Unlike earlier generations of what are increasingly being called “global capability centers,” which tended to be large operations set up by multinationals, more than half of last year’s new centers were launched by first-time adopters — and on the smaller side, with fewer than 250 full-time employees; in some cases, fewer than 50. The desire to build internal IT capabilities amid a tight talent market is at the heart of the trend. As companies have grown comfortable with offshore and nearshore delivery, the captive model offers the opportunity to tap larger populations of lower-cost talent without handing the reins to a third party. “Eroding customer satisfaction with outsourcing relationships — per some reports, at an all-time low — has caused some companies to opt to ‘do it themselves,’” says Dave Borowski, senior partner, operations excellence, at West Monroe. What’s more, establishing a captive center no longer needs to be entirely DIY.


Questioning cloud’s environmental impact

Contrary to popular belief, cloud computing is not inherently green. Cloud data centers require a lot of energy to power and maintain their infrastructure. That should be news to nobody. Cloud is becoming the largest user of data center space, perhaps only to be challenged by the growth of AI data centers, which are becoming a developer’s dream. But wait, don’t cloud providers use solar and wind? Although some use renewable energy, not all adopt energy-efficient practices. Many cloud services still rely on coal-fired power. Ask cloud providers which of their data centers run on renewable energy. Most will provide a non-answer, saying their power mix is complex and ever-changing. I’m not going too far out on a limb in stating that most use nonrenewable power and will do so for the foreseeable future. The carbon emissions from cloud computing largely stem from the power consumed by the providers’ platforms and the inefficiencies embedded within the applications running on them. A cloud provider may do an excellent job of building a multitenant system that optimizes the servers it runs, but it has no control over how well its customers use those resources.
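The link between power and emissions is simple arithmetic: operational carbon is the energy a workload consumes times the carbon intensity of the grid feeding the data center. The intensity figures below are rough illustrative values, not any provider’s published numbers; real intensities vary by region and by hour:

```python
def emissions_kg(energy_kwh, grid_kg_co2_per_kwh):
    """Operational carbon: energy consumed times grid carbon intensity."""
    return energy_kwh * grid_kg_co2_per_kwh

# Illustrative grid intensities in kg CO2e per kWh (assumed values)
GRIDS = {"coal_heavy": 0.9, "mixed": 0.4, "mostly_renewable": 0.05}

workload_kwh = 10_000  # hypothetical monthly cloud usage
for grid, intensity in GRIDS.items():
    print(f"{grid}: {emissions_kg(workload_kwh, intensity):,.0f} kg CO2e")
```

The same workload can differ by more than an order of magnitude in emissions depending on where it runs, which is why the question of which data centers run on renewables matters more than a provider’s aggregate claims.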


Revolutionizing Real-Time Data Processing: The Dawn of Edge AI

For effective edge computing, efficient and computationally cost-effective technology is needed. One promising option is reservoir computing, a computational method designed for processing signals recorded over time. It can transform these signals into complex patterns using reservoirs that respond nonlinearly to them. In particular, physical reservoirs, which use the dynamics of physical systems, are both computationally cost-effective and efficient. However, their ability to process signals in real time is limited by the natural relaxation time of the physical system, which constrains real-time processing and requires adjustments for best learning performance. ... Recently, Professor Kentaro Kinoshita and Mr. Yutaro Yamazaki developed an optical device with features that support physical reservoir computing and allow real-time signal processing across a broad range of timescales within a single device. Speaking about their motivation for the study, Prof. Kinoshita explains: “The devices developed in this research will enable a single device to process time-series signals with various timescales generated in our living environment in real-time. In particular, we hope to realize an AI device to utilize in the edge domain.”
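The reservoir idea can be illustrated in software: a fixed, randomly connected network is driven by the input signal, responds nonlinearly, and retains a fading memory of past inputs; only a simple linear readout (not shown here) is ever trained. The reservoir size, leak rate, and weight scaling below are arbitrary illustrative choices:

```python
import math
import random

random.seed(0)
N = 20  # reservoir size (tiny, for illustration)
# Fixed random weights; shrinking the recurrent weights keeps dynamics stable
W_in = [random.uniform(-0.5, 0.5) for _ in range(N)]
W = [[random.uniform(-0.1, 0.1) for _ in range(N)] for _ in range(N)]

def step(state, u, leak=0.3):
    """One leaky-integrator update: the reservoir responds nonlinearly
    to input u while retaining a fading memory of past inputs."""
    new = []
    for i in range(N):
        pre = W_in[i] * u + sum(W[i][j] * state[j] for j in range(N))
        new.append((1 - leak) * state[i] + leak * math.tanh(pre))
    return new

# Drive the reservoir with a simple time-series signal
state = [0.0] * N
for t in range(50):
    state = step(state, math.sin(0.2 * t))
# `state` is now a rich nonlinear feature vector of the input's recent history
```

A physical reservoir replaces this simulated network with the intrinsic dynamics of a material or optical system, which is why its usable timescales are tied to the system’s relaxation time - the limitation the new optical device is designed to overcome.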


Agile software promises efficiency. It requires a cultural shift to get right

The end result of these fake agile practices is lip service and ceremonies at the expense of the original manifesto’s principles, Bacon said. ... To get agile right, Wickham recommended building on situations in your organization where agile is already practiced relatively effectively. Most often, that involves teams building internal tools, such as administrative panels for customer support or CI/CD pipelines. Those use cases have more tolerance for “let’s put something up, ask for feedback, iterate, repeat,” he said. After all, internal customers are more willing to accept seeing something that’s initially imperfect. “This indicates to me that people comprehend agile and have at least a baseline understanding of how to use it, but a lack of willingness to use it as defined when it comes to external customers,” said Wickham. ... “Agile is an easy term to toss around as a ‘solution,’” Richmond said. “But effective agile does not have a cookie-cutter solution to improving execution.” Getting it right requires a focus on understanding the company’s challenges, how those challenges arise from the business environment, how they impact business outcomes, and, finally, how to apply agile concepts to the business.


Building a Strong Data Culture: A Strategic Imperative

Effective executive backing is crucial for prioritizing and financing data initiatives that help cultivate an organization’s data-centric culture. Initiatives such as data literacy programs equip employees with vital data skills that are fundamental to fostering such a culture. Nonetheless, these programs often fail to thrive without the robust support of leadership. Results from the same Alation research show that only 15 percent of companies with moderate or weak data leadership integrate data literacy across most departments or throughout the entire organization. This is in stark contrast to the 61 percent adoption rate in companies with strong data leadership. Moreover, strong data leadership involves more than just endorsement; it requires executives to actively engage and set an example in data culture initiatives. For instance, when an executive carves out time from her hectic schedule to partake in data literacy training, it conveys a much more powerful message to her team than if she were to simply instruct others to prioritize such training. This hands-on approach by leaders underscores the importance of data literacy and demonstrates their commitment to embedding a data-driven culture in the organization.


Cybercriminals harness AI for new era of malware development

Threat actors have already shown how AI can help them develop malware with only limited knowledge of programming languages, brainstorm new TTPs, compose convincing text for social engineering attacks, and increase their operational productivity. Large language models such as ChatGPT remain in widespread use, and Group-IB analysts have observed continued interest on underground forums in ChatGPT jailbreaking and in the development of specialized generative pre-trained transformers (GPTs), as users look for ways to bypass ChatGPT’s security controls. Group-IB experts have also noticed that, since mid-2023, four ChatGPT-style tools have been developed to assist cybercriminal activity: WolfGPT, DarkBARD, FraudGPT, and WormGPT – all with different functionalities. FraudGPT and WormGPT are highly discussed tools on underground forums and Telegram channels, tailored for social engineering and phishing. Conversely, tools like WolfGPT, which focus on code or exploits, are less popular due to training complexities and usability issues. Yet their advancement poses risks for sophisticated attacks.



Quote for the day:

"It takes courage and maturity to know the difference between a hoping and a wishing." -- Rashida Jourdain