Daily Tech Digest - February 29, 2024

Why governance, risk, and compliance must be integrated with cybersecurity

Incorporating cybersecurity practices into a GRC framework means connected teams and integrated technical controls for the University of Phoenix, where GRC and cybersecurity sit within the same team, according to Larry Schwarberg, the VP of information security. At the university, the cybersecurity risk management framework is built primarily on a consolidated view of the NIST 800-171 and ISO 27001 standards, which in turn guides other elements of its overall posture. “The results of the risk management framework feed other areas of compliance from external and internal auditors,” Schwarberg says. The cybersecurity team works closely with legal and ethics, compliance and data privacy, internal audit and enterprise risk functions to assess overall compliance with in-scope regulatory requirements. “Since our cybersecurity and GRC roles are combined, they complement each other and the roles focus on evaluating and implementing security controls based on risk appetite for the organization,” Schwarberg says. The role of leadership is to provide awareness, communication, and oversight to teams to ensure controls have been implemented and are effective.


India's talent crunch: Why choose build approach over buying?

The primary challenge is the shortage of workers equipped with digital skill sets. Despite the high demand for these skills, much of the current workforce has yet to acquire them, especially given the constant evolution of technology. The lack of niche skill sets essential for working with advanced technologies like AI, blockchain, cloud, and data science further contributes to this gap. The turning point, however, is now within reach as businesses and professionals recognise the crucial need for upskilling and reskilling. At DXC India, we have embraced a strategy that prioritises internal talent development, favouring the 'build' approach over the 'buy' strategy. By upskilling our existing workforce with relevant, in-demand skills, we address our talent needs and foster individual career growth. This method is particularly effective as experienced employees can swiftly acquire new skills and undergo cross-training. This agility is an asset in navigating the rapidly evolving business landscape, benefiting employees and customers. Identifying the specific talent required and subsequently building that talent pool forms the crux of this strategy.


Why does AI have to be nice? Researchers propose ‘Antagonistic AI’

“There was always something that felt off about the tone, behavior and ‘human values’ embedded into AI — something that felt deeply ingenuine and out of touch with our real-life experiences,” Alice Cai, co-founder of Harvard’s Augmentation Lab and researcher at the MIT Center for Collective Intelligence, told VentureBeat. She added: “We came into this project with a sense that antagonistic interactions with technology could really help people — through challenging [them], training resilience, providing catharsis.” But it also comes from an innate human characteristic that avoids discomfort, animosity, disagreement and hostility. Yet antagonism is critical; it is even what Cai calls a “force of nature.” So, the question is not “why antagonism?,” but rather “why do we as a culture fear antagonism and instead desire cosmetic social harmony?,” she posited. Essayist and statistician Nassim Nicholas Taleb, for one, presents the notion of the “antifragile,” which argues that we need challenge and context to survive and thrive as humans. “We aren’t simply resistant; we actually grow from adversity,” Arawjo told VentureBeat.


How companies can build consumer trust in an age of privacy concerns

Aside from reworking the way they interact with customers and their data, businesses should also tackle the question of personal data and privacy with a different mindset – that of holistic identity management. Instead of companies holding all the data, holistic identity management offers the opportunity to “flip the script” and put the power back in the hands of consumers. Customers can pick and choose what to share with businesses, which helps build greater trust. ... Greater privacy and greater personalization may seem to be at odds, but they can go hand in hand. Rethinking their approach to data collection and leveraging new methods of authentication and identity management can help businesses create this flywheel of trust with customers. This will be all the more important with the rise of AI. “It’s never been cheaper or easier to store data, and AI is incredibly good at going through vast amounts of data and identifying patterns of aspects that actual humans wouldn’t even be able to see,” Gore explains. “If you take that combination of data that never dies and the AI that can see everything, that’s when you can see that it’s quite easy to misuse AI for bad purposes. ...”


Testing Event-Driven Architectures with Signadot

With synchronous architectures, context propagation is a given, supported by multiple libraries across multiple languages and even standardized by the OpenTelemetry project. There are also several service mesh solutions, including Istio and Linkerd, that handle this type of routing perfectly. But with asynchronous architectures, context propagation is not as well defined, and service mesh solutions simply do not apply — at least, not now: They operate at the request or connection level, but not at a message level. ... One of the key primitives within the Signadot Operator is the routing key, an opaque value assigned by the Signadot Service to each sandbox and route group that’s used to route requests within the system. Asynchronous applications also need to propagate routing keys within the message headers and use them to determine the workload version responsible for processing a message. ... This is where Signadot’s request isolation capability really shows its utility: This isn’t easily simulated with a unit test or stub, and duplicating an entire Kafka queue and Redis cache for each testing environment can create unacceptable overhead. 
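To make the routing-key mechanism concrete, here is a minimal sketch, assuming a Kafka-based asynchronous flow and the kafka-python client: the producer attaches the routing key as a message header, and a sandboxed consumer checks that header to decide whether it owns the message. Topic, header name, and key value are hypothetical illustrations, not Signadot's actual implementation.

```python
# Minimal sketch: propagate a routing key in Kafka message headers and let a
# sandboxed consumer decide whether it should process the message.
from kafka import KafkaConsumer, KafkaProducer

ROUTING_HEADER = "routing-key"
MY_ROUTING_KEY = "sandbox-abc123"  # hypothetical key assigned to this sandbox

# Producer side: carry the routing key with every message published.
producer = KafkaProducer(bootstrap_servers="localhost:9092")
producer.send("orders", value=b'{"order_id": 42}',
              headers=[(ROUTING_HEADER, MY_ROUTING_KEY.encode())])
producer.flush()

# Consumer side: only process messages whose routing key matches this sandbox.
consumer = KafkaConsumer("orders", bootstrap_servers="localhost:9092",
                         group_id="orders-sandbox-abc123")
for message in consumer:
    headers = dict(message.headers or [])
    if headers.get(ROUTING_HEADER, b"").decode() == MY_ROUTING_KEY:
        print("processing in sandbox:", message.value)
    # Otherwise this consumer skips the message and the baseline version handles it.
```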


The 7 Rs of Cloud Migration Strategy: A Comprehensive Overview

With the seven Rs as your compass, it’s time to chart your course through the inevitable challenges that arise on any AWS migration journey. By anticipating these roadblocks and proactively addressing them, you can ensure a smoother and more successful transition to the cloud. ... Navigating the vast and ever-evolving AWS ecosystem can be daunting, especially for organizations with limited cloud experience. This complexity, coupled with a potential skill gap in your team, can lead to inefficient resource utilization, suboptimal architecture choices, and delayed timelines. ... Migrating sensitive data and applications to the cloud requires meticulous attention to security protocols and compliance regulations. Failure to secure your assets can lead to data breaches, reputational damage, and hefty fines. ... While leveraging the full range of AWS services can offer significant benefits, over-reliance on proprietary solutions can create an unhealthy dependence on a single vendor. This can limit your future flexibility and potentially increase costs. ... While AWS offers flexible pricing models and optimization tools, managing cloud costs effectively requires ongoing monitoring and proactive adjustments.


What is a chief data officer? A leader who creates business value from data

The chief data officer (CDO) is a senior executive responsible for the utilization and governance of data across the organization. While the chief data officer title is often shortened to CDO, the role shouldn’t be confused with chief digital officer, which is also frequently referred to as CDO. ... Although some CIOs and CTOs find CDOs encroach on their turf, Carruthers says the boundaries are distinct. CDOs are responsible for areas such as data quality, data governance, master data management, information strategy, data science, and business analytics, while CIOs and CTOs manage and implement information and computer technologies, and manage technical operations, respectively. ... The chief data officer is responsible for the fluid that goes in the bucket and comes out; that it goes to the right place, and that it’s the right quality and right fluid to start with. Neither the bucket nor the water works without the other. ... Gomis says he’s seen chief data officers come from marketing backgrounds, and that some are MBAs who’ve never worked in data analytics before. “Most of them have failed, but the companies that hired them felt that the influencer skillset was more important than the data analytics skillset,” he says.


The UK must become intentional about data centers to meet its digital ambitions

For the UK to maintain its leadership position in data centers, it’s not enough to leave things to chance. A number of trends are now shaping investment flows both within the UK and on the global stage. First, land and power availability. Access to land and power is becoming increasingly constrained in London and surrounding areas. For example, property prices in Slough have risen by 44 percent since 2019, and the Greater London Authority has told some developers there won’t be electrical capacity to build in certain areas of the city until 2035. Data centers use large quantities of electricity, the equivalent of towns or small cities, in some cases, to power servers and ensure resilience in service. In West London, Distribution Network Operators have started to raise concerns about the availability of grid supply points with sufficient power to meet the rapid influx of requests from data center operators wanting to co-locate adjacent to fiber optic cables that pass along the M4 corridor, and then cross the Atlantic. In response to these power and space concerns, the hyperscalers have already started to favor countries in Scandinavia.


Rubrik CIO on GenAI’s Looming Technical Debt

This is a case of, “Hey, there’s a leak in the boat, and what are you going to do about it? Are you going to let things get drowned? Or are you going to make sure that there is an equal amount of water that leaves the boat?” So, you have to apply that thinking to your annual plan. Typically, I’ll say that there’s going to be a percentage of resources, budget, and effort I’m going to put into reducing tech debt … And that’s where you start competing with other business initiatives. You will have a bunch of business stakeholders that might look at that as something that should just be kicked down the road because they want to use that funding for something else. That’s where, I believe, educating a lot of my business leaders on what that does to the organization comes in. When I don’t address that tech debt on a regular basis, production SLAs start to deteriorate. ... There’s going to be some consolidation and some standardization across the board. So, the first couple of years are going to be rocky for everybody. But that doesn’t scare us, because we’re going to put more robust governance on top of this new area. We need to have a lot more debates about this internally and say, “Let’s be cautious, guys. Because this is coming from all sides.”


How organizations can navigate identity security risks in 2024

IT, identity, cloud security and SecOps teams need to collaborate around a set of security and lifecycle management processes to support business objectives around security, timely access delivery and operational efficiency. These processes are best optimized by automating manual tasks, while ensuring that the ownership and accountability for manual tasks is well understood. In addition, quantifying and tracking business outcomes in terms of metrics highlights IAM’s effectiveness and identifies areas that need improvement or more automation. Utilizing IAM for cloud and Software as a Service (SaaS) applications introduces a spectrum of challenges, rooted in silos of identity. Each system or application has its own identity model and its own concept of various identity settings and permissions: accounts, credentials, groups, roles, entitlements and other access policies. Misconfigured permissions and settings heighten the likelihood of data breaches. To address these complexities, organizations need business users and security teams to collaborate on an identity management and governance framework and overarching processes for policy-based authentication, SSO, lifecycle management, security and compliance. Automation can streamline these processes and help ensure effective access controls.
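As a purely hypothetical sketch of the automation described above (the data model and role mapping are invented, not any particular IAM product), the snippet below consolidates entitlements from different silos into one view and flags accounts whose permissions exceed what their approved role allows:

```python
# Flag accounts whose entitlements exceed the set approved for their role.
APPROVED = {
    "analyst": {"read:warehouse"},
    "engineer": {"read:warehouse", "write:pipeline"},
}

accounts = [  # imagined export from several systems, normalized into one shape
    {"user": "alice", "role": "analyst",
     "entitlements": {"read:warehouse", "write:pipeline"}},  # excess permission
    {"user": "bob", "role": "engineer",
     "entitlements": {"read:warehouse", "write:pipeline"}},
]

def excessive(account: dict) -> set:
    """Return entitlements not covered by the account's approved role."""
    return account["entitlements"] - APPROVED.get(account["role"], set())

for acct in accounts:
    extra = excessive(acct)
    if extra:
        print(f"{acct['user']}: review needed, unexpected entitlements {sorted(extra)}")
```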



Quote for the day:

“People may hear your voice, but they feel your attitude.” -- John Maxwell

Daily Tech Digest - February 28, 2024

3 guiding principles of data security in the AI era

Securing the AI: All AI deployments – including data, pipelines, and model output – cannot be secured in isolation. Security programs need to account for the context in which AI systems are used and their impact on sensitive data exposure, effective access, and regulatory compliance. Securing the AI model itself means identifying model risks, over-permissive access, and data flow violations throughout the AI pipeline. Securing from AI: Just like most new technologies, artificial intelligence is a double-edged sword. Cyber criminals are increasingly turning to AI to generate and execute attacks at scale. Attackers are currently leveraging generative AI to create malicious software, draft convincing phishing emails, and spread disinformation online via deep fakes. There’s also the possibility that attackers could compromise generative AI tools and large language models themselves. ... Securing with AI: How can AI become an integral part of your defense strategy? Embracing the technology for defense opens possibilities for defenders to anticipate, track, and thwart cyberattacks to an unprecedented degree. AI offers a streamlined way to sift through threats and prioritize which ones are most critical, saving security analysts countless hours. 


Web3 messaging: Fostering a new era of privacy and interoperability

Designed to be interoperable with various decentralized applications (DApps) and blockchain networks, Web3 messaging protocols enable developers to seamlessly integrate messaging functionality into their decentralized services — a stark contrast to their traditional equivalents that host closed ecosystems, which limit communication with users on other platforms. Beoble, a communication infrastructure and ecosystem that allows users to chat between wallets, is one of the Web3 messaging platforms ready to change how people use digital communication. The platform comprises a web-based chat application and a toolkit for seamless integration with DApps. Dubbed “WhatsApp for Web3,” Beoble removes the need for login methods like Twitter or Discord, instead mandating only a wallet for access. Users can log in using their wallets and send texts, images, videos, links and files across blockchain networks. Blockchain app users can utilize emojis and nonfungible token (NFT) stickers in their digital communication with Beoble, adding a layer of personality to their conversations. 


As data takes center stage, Codified wants to bring flexibility to governance

As Gupta sees it, many large companies are authoring policies and trying to implement them in various ways, but he sees software that is too rigid for today’s use cases, leaving them vulnerable, especially when they have to change policy. He wants to change that by translating policy into code that can be implemented in a variety of ways, connected to various applications that need access to the data, and easily changed when new customers or user categories come along. “We let you author policies in natural language, in a declarative way or using a UI - pick your favorite way - but when those policies are authored, we can codify them into something that can be implemented in a number of ways and can be very easily changed,” he said. To that end, the company also enables customers to set conditions, such as whether you’ve had security training in the last 365 days, or you’re already part of a team working on a sensitive project. Ultimately, this enables companies to set hard-coded data access rules based on who the employee is and the applications they are using or projects they are part of, rather than relying on creating groups on which to base these rules.
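A minimal sketch of that policy-as-code idea, using invented names and fields rather than Codified's actual product: each condition (recent security training, membership of a project team) becomes a small predicate, and access to a dataset is granted only when every condition attached to it holds. Changing policy for a new customer or user category then means editing one entry instead of rewiring every application.

```python
# Conditions expressed as code; access is the conjunction of a dataset's conditions.
from datetime import date, timedelta

def trained_recently(user: dict) -> bool:
    """Security training completed within the last 365 days."""
    return date.today() - user["last_security_training"] <= timedelta(days=365)

def on_project(project: str):
    """Condition factory: user must belong to the named project team."""
    return lambda user: project in user["projects"]

POLICIES = {
    "customer_pii": [trained_recently, on_project("churn-analysis")],
}

def can_access(user: dict, dataset: str) -> bool:
    return all(condition(user) for condition in POLICIES.get(dataset, []))

analyst = {"name": "Dana",
           "last_security_training": date(2023, 11, 2),
           "projects": ["churn-analysis"]}
print(can_access(analyst, "customer_pii"))  # True only while both conditions hold
```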


Looking Forward, Looking Back: A Quarter Century as a CISO

The first distributed denial of service (DDoS) attack occurred in 1999, followed by Code Red and Nimda worm cyberattacks that targeted web servers in 2001, and SQL Slammer in 2003, which spread rapidly and brought focus on the need to patch vulnerable systems. The end of the millennium also brought Y2K, the Millennium Bug, which exposed the vulnerability of existing computing infrastructures that formatted dates with only the final two digits and raised the profile of CISOs and other security professionals. Organizations recognized the necessity of dedicated executives responsible for managing cybersecurity risks. ... CISOs were soon making the news, and not always in a good way. Former Uber CISO Joe Sullivan was found guilty of felony obstruction of justice and concealing a data breach in October 2022. The following month, CISO Lea Kissner of Twitter (now X) resigned along with the company’s chief privacy officer and its chief compliance officer over concerns that Twitter’s new leadership was pushing for the release of products and platform changes without effective security reviews.


How Generative AI is Revamping Digital Transformation to Change How Businesses Scale

Crucially, generative AI can help to tailor the dining experience for customers in a way that significantly improves the quality of in-house or takeaway eating. This is achieved by GenAI models analyzing data like guest preferences, dietary restrictions, past orders, and behavior to offer personalized menu items and even recommend food and drink pairings. Generative AI will even be capable of using available datasets to generate offers on the fly as an instant call-to-action (CTA) if it deems an online visitor isn't yet ready to convert their interest into action. We're already seeing leading global restaurants announce the implementation of generative AI for their processes. ... Generative AI became the technological buzzword of 2023, and for good reason. However, there will be many hurdles to overcome in the development of the technology before it drives widespread digital transformation. Regulatory hurdles may be tricky to overcome due to issues in how AI programs can handle private data and utilize intellectual property (IP). Quality shortcomings could also cause issues in governance among early LLMs, and we've seen plenty of cases where language models "hallucinate" when dealing with unusual queries.


NIST CSF 2.0 released, to help all organizations, not just those in critical infrastructure

The CSF’s governance component emphasizes that cybersecurity is a major source of enterprise risk that senior leaders should consider alongside others, such as finance and reputation. “Developed by working closely with stakeholders and reflecting the most recent cybersecurity challenges and management practices, this update aims to make the framework even more relevant to a wider swath of users in the United States and abroad,” according to Kevin Stine, chief of NIST’s Applied Cybersecurity Division. ... The framework’s core is now organized around six key functions: Identify, Protect, Detect, Respond, and Recover, along with CSF 2.0’s newly added Govern function. When considered together, these functions provide a comprehensive view of the life cycle for managing cybersecurity risk. The updated framework anticipates that organizations will come to the CSF with varying needs and degrees of experience implementing cybersecurity tools. New adopters can learn from other users’ successes and select their topic of interest from a new set of implementation examples and quick-start guides designed for specific types of users...


Even LLMs need education—quality data makes LLMs overperform

Like any student, LLMs need a good source text to produce good outputs. As Satish Jayanthi, CTO and co-founder of Coalesce, told us, “If there were LLMs in the 1700s, and we asked ChatGPT back then whether the earth is round or flat and ChatGPT said it was flat, that would be because that's what we fed it to believe as the truth. What we give and share with an LLM and how we train it will influence the output.” Organizations that operate in specialized domains will likely need to train or fine-tune LLMs on specialized data that teaches those models how to understand that domain. Here at Stack Overflow, we’re working with our Teams customers to incorporate their internal data into GenAI systems. When Intuit was ramping up their GenAI program, they knew that they needed to train their own LLMs to work effectively in financial domains that use tons of specialized language. And IBM, in creating an enterprise-ready GenAI platform in watsonx, made sure to create multiple domain-aware models for code, geospatial data, IT events, and molecules.
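As a rough illustration of what fine-tuning on specialized data can look like in practice, here is a minimal sketch using the Hugging Face Transformers library; the base model, corpus file, and hyperparameters are arbitrary placeholders, not details of what Stack Overflow, Intuit, or IBM actually run.

```python
# Minimal sketch: continue training a small causal LM on a domain-specific corpus.
from datasets import load_dataset
from transformers import (AutoModelForCausalLM, AutoTokenizer,
                          DataCollatorForLanguageModeling, Trainer,
                          TrainingArguments)

model_name = "gpt2"  # stand-in for whichever base model is being adapted
tokenizer = AutoTokenizer.from_pretrained(model_name)
tokenizer.pad_token = tokenizer.eos_token
model = AutoModelForCausalLM.from_pretrained(model_name)

# Hypothetical corpus of internal documents, one text snippet per line.
dataset = load_dataset("text", data_files={"train": "domain_corpus.txt"})

def tokenize(batch):
    return tokenizer(batch["text"], truncation=True, max_length=512)

tokenized = dataset["train"].map(tokenize, batched=True, remove_columns=["text"])

trainer = Trainer(
    model=model,
    args=TrainingArguments(output_dir="domain-llm", num_train_epochs=1,
                           per_device_train_batch_size=2),
    train_dataset=tokenized,
    data_collator=DataCollatorForLanguageModeling(tokenizer, mlm=False),
)
trainer.train()
```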


State of FinOps 2024: Reducing Waste and Embracing AI

Engineers remain the biggest beneficiary of FinOps observability, even though "engineering enablement" has dropped to a lower position in the report's ranking of surveyed priorities. This indicates that engineers are those best suited to responding to a sudden change in cost metrics. The report observes that the "engineering persona" is reported as getting the most value from both "FinOps training and self-service reporting." ... While waste reduction is a common driver across all respondents, segmenting the survey by cloud spend revealed that those with smaller budgets would tend to then prioritise improvements in the accuracy of billing forecasts. The report states that these respondents faced the challenge of understanding "the trajectory of spending" prior to it "getting out of hand." Most invested in low-effort solutions such as "manual adjustments" to generated forecast data. In contrast, those with larger budgets tended to prioritise the optimisation of commitment-based discounts to benefit from economies of scale. This included the right-sizing of "reserved instances, savings plans, committed use discounts," as well as specific negotiated discounts.


How to Develop an Effective Governance Risk and Compliance Strategy

“Overcoming silos and fostering communication needs to begin at the top,” Rothaar says in an email interview. Furthermore, aligning GRC goals with broader business objectives ensures both executive management and individual departments recognize the impact that GRC initiatives have on organizational success. “Promoting a culture of communication with open dialogue and knowledge-sharing is essential to a successful and efficient GRC strategy,” she says. Ringel says organizations need to promote awareness and engagement with risk and compliance, because they influence every member of the organization. “You are only as strong as your weakest link when it comes to risk, so making sure everyone is on the same page and treating risk and compliance smartly is key,” she explains. Compliance is less directly obvious, but if those values are not communicated through every department – product design, development, customer support, marketing, and sales – the end product will reflect that disconnect. “Not every employee needs to know specific regulations, but everyone needs to share the values of data governance and compliance,” Ringel says.


Data storage problems and how to fix them

When undertaking the journey to digitisation, it’s important to consider the issues and challenges and more importantly – know how to avoid them. ... It’s wise not to attempt a massive data overhaul all at once, especially before you’ve considered what data is valuable, how and where you will store the data and investigated the different options and models available. It all depends on the scope of transformation and the state the organisation is in. For start-ups, it’s a green field and the experience is as good as the plan and its periodic inspection and adaptation. For organisations with historic data to migrate, it can get complex. I have experienced both and the key was to have identified what data is valuable, a clear cut-off date and a policy on how far back we digitise. ... If you are unsure on where to start, consult an expert to determine the best solutions and view the initial costs as an investment. Digital transformation of data brings efficiency and time savings and, with those, reduced costs. The long-term benefit can far outweigh the upfront costs. Digital systems are typically faster and more efficient than manual systems.



Quote for the day:

"Nothing is so potent as the silent influence of a good example." -- James Kent

Daily Tech Digest - February 27, 2024

Market incentives in the pursuit of resilient software and hardware

For cyber security to continue to evolve as a discipline, we need both quantitative and qualitative insights to understand those aspects that, when combined, work most effectively to address threat and risk, along with human factors and operational dimensions. These solutions then need to be coupled with a compelling narrative to explain our conclusions and objectives to a range of audiences. For the quantitative aspects, access to underlying data types and sources is critical. When we think about software and hardware specifically, there are many possible points of measurement which can contribute to our understanding of its intrinsic security and support assurance. ... Improving the resilience of our software and hardware technology stacks in ways that can scale globally is a multi-faceted, sociotechnical challenge. Creating the right market incentives is our priority. Without these in place, we cannot begin to make progress at the pace or scale we need. Our collective interventions to improve engineering best practices and more transparent behaviours must be driven by data, and targeted by research and innovation. All of this requires better access to skills and cyber education, improved tools, and accessible infrastructure. 


Is creating an in-house LLM right for your organization?

Before delving into the world of foundational models and LLMs, take a step back and note the problem you are looking to solve. Once you identify this, it’s important to determine which natural language tasks you need. Examples of these tasks include summarization, named entity recognition, semantic textual similarity, and question answering, among others. ... Before using an AI tool as a service, government agencies need to make sure the service they are using is safe and trustworthy, which isn’t usually obvious and not captured by just looking at an example set of output. And while the executive order doesn’t apply to private sector businesses, these organizations should take this into consideration if they should adopt similar policies. ... Your organization’s data is the most important asset to evaluate before training your own LLM. Those companies that have accumulated high-quality data over time are the luckiest in today’s LLM age, as data is needed at almost every step of the process including training, testing, re-training, and beta tests. High-quality data is the key to success when training an LLM, so it is important to consider what that truly means. 
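Before committing to an in-house model, the tasks listed above can often be prototyped with off-the-shelf components to confirm which ones the problem actually needs. A minimal sketch, assuming Hugging Face Transformers pipelines with their default models (illustrative only, not a recommendation):

```python
# Quick prototypes of three of the natural-language tasks mentioned above.
from transformers import pipeline

summarizer = pipeline("summarization")
ner = pipeline("ner", aggregation_strategy="simple")
qa = pipeline("question-answering")

doc = ("Acme Corp reported record cloud revenue in the fourth quarter, "
       "driven largely by demand for its new AI services.")

print(summarizer(doc, max_length=25, min_length=5)[0]["summary_text"])
print(ner(doc))  # named entities such as "Acme Corp"
print(qa(question="What drove Acme's revenue?", context=doc)["answer"])
```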


Privacy Watchdog Cracks Down on Biometric Employee Tracking

In Serco's case, the ICO said Friday that the company had failed to demonstrate why using facial recognition technology and fingerprint scanning was "necessary or proportionate" and that by doing so it had violated the U.K. General Data Protection Regulation. "Biometric data is wholly unique to a person so the risks of harm in the event of inaccuracies or a security breach are much greater - you can't reset someone's face or fingerprint like you can reset a password," said U.K. Information Commissioner John Edwards. "Serco Leisure did not fully consider the risks before introducing biometric technology to monitor staff attendance, prioritizing business interests over its employees' privacy." "There have been a number of warnings that facial recognition and fingerprints are problematic," said attorney Jonathan Armstrong, a partner at Cordery Compliance. "Most data protection regulators don't like technology like this when it is mandatory for employees. If you're looking at this you'll need a solid data protection impact assessment setting out why the tech is needed, why there are no better solutions, and what you're doing to minimize the impact on those affected."


Cloud providers should play by same rules as telcos, EU commissioner tells MWC

“Currently, our regulatory framework is too fragmented. We are not making the most of our single market of 450 million potential customers. We need a true digital single market to facilitate the emergence of pan-European operators with the same scale and business opportunities as their counterparts in other regions of the world. And we need a true level playing field, because in a technological space where telecommunications and cloud infrastructures converge, there is no justification for them not to play by the same rules,” said the European Commissioner. This means, for Breton, “similar rights and obligations for all actors and end-users of digital networks. This means, first and foremost, establishing the ‘country of origin’ principle for telecoms infrastructure services, as is already the case for the cloud, to reduce compliance costs and investment requirements for pan-European operators.” ... Finally, Breton advocated “Europeanizing the allocation of licenses for the use of spectrum. In the technology race to 6G, we cannot afford any more delays in the concession process, with huge disparities in the timing of auctions and infrastructure deployment between Member States...”


Unlocking the Power of Automatic Dependency Management

Dependency automation relies on having a robust and reliable CI/CD system. Integrating automatic dependency updates into the development workflow is going to exercise this system much more frequently than updates done by hand, so this process demands robust testing and continuous integration practices. Any update, while beneficial, can introduce unexpected behaviors or compatibility issues. This is where a strong CI pipeline comes into play. By automatically testing each update in a controlled environment, teams can quickly identify and address any issues. Practices like automated unit tests, integration tests and even canary deployments are invaluable. They act as a safety net, ensuring that updates improve the software without introducing new problems. Investing in these practices streamlines the update process, but also reinforces overall software quality and reliability. ... Coupled with a robust infrastructure that supports these tools, including adequate server capacity and a reliable network, organizations can create an environment where automatic dependency updates thrive, contributing to a more resilient and agile development process.
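As a toy illustration of that safety net (a stand-in for real tooling such as Renovate or Dependabot feeding an existing CI pipeline), the sketch below upgrades a single hypothetical dependency and accepts the update only if the project's test suite still passes:

```python
# Upgrade one dependency, then gate acceptance on the test suite.
import subprocess
import sys

def try_update(package: str) -> bool:
    # Install the newest available version of the package.
    subprocess.run([sys.executable, "-m", "pip", "install", "--upgrade", package],
                   check=True)
    # Run the tests; a non-zero exit code means the update should be rejected.
    result = subprocess.run([sys.executable, "-m", "pytest", "-q"])
    return result.returncode == 0

if __name__ == "__main__":
    ok = try_update("requests")  # hypothetical dependency
    print("update accepted" if ok else "update rejected - roll back and pin")
```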


What Is a Good Management Model in Agile Software Development?

Despite that recognition, an approach referred to by Jurgen Appelo as “Management 2.0,” or “doing the right thing wrong” is still being used. This management style involves a manager who sticks strictly to the organizational hierarchy and forgets that human beings usually don’t like top-down control and mandatory improvements. Within this approach, 1:1 meetings are conducted with employees for individual goal setting. Although this could be considered a good idea — to manage people and their interests — the key is the way managers do it. They should be managing the system around their people instead of managing the people directly. ... Management 3.0, or “Doing the right thing,” can be the appropriate solution, in which organizations are considered to be complex and adaptive systems. Jurgen Appelo describes this style of management as “taking care of the system instead of manipulating the people.” Or, in other words, improving the environment so that “it keeps workers engaged and happy is one of the main responsibilities of management; otherwise, the organization fails to generate value.”


Hacker group hides malware in images to target Ukrainian organizations

The attacks detected by Morphisec delivered a malware loader known as IDAT or HijackLoader that has been used in the past to deliver a variety of trojans and malware programs including Danabot, SystemBC, and RedLine Stealer. In this case, UAC-0184 used it to deploy a commercial remote access trojan (RAT) program called Remcos. “Distinguished by its modular architecture, IDAT employs unique features like code injection and execution modules, setting it apart from conventional loaders,” the Morphisec researchers said. “It employs sophisticated techniques such as dynamic loading of Windows API functions, HTTP connectivity tests, process blocklists, and syscalls to evade detection. The infection process of IDAT unfolds in multiple stages, each serving distinct functionalities.” ... To execute the hidden payload, the IDAT loader employs another technique known as module stomping, where the payload is injected into a legitimate DLL file — in this case one called PLA.dll (Performance Logs and Alerts) — to lower the chances that an endpoint security product will detect it.


“Ruthlessly prioritize what’s critical”: Check Point expert on CISOs and the evolving attack surface

Ford argues that CISOs need to face the fact that they cannot secure everything and question how they can best spend their finite resources on attack surface management. This attitude has been reflected in the rise of strategies such as zero trust, and Ford says that in 2024 CISOs will continue to struggle to secure an increasing number of devices and data and contend with a landscape that is evolving in real time. “I think you have to do two things really well: the first thing I think you have to do is truly identify what’s critical and ruthlessly prioritize what’s critical. The second thing is you have to deploy lasting and intelligent solutions”, Ford argued. “[Businesses] have to deploy solutions that grow and contract with the business and can grow and contract as the threat landscape grows and contracts.” Mitchelson offers some examples of what this sort of deployment might look like in the future, arguing the most potential lies in using technology to realize this elastic functionality. “Internally within the structures of the organization, it could be a matrix type structure whereby you’re actually able to expand and contract internal resourcing within teams as to what you do”, Mitchelson suggests.


Gartner Identifies the Top Cybersecurity Trends for 2024

Security leaders need to prepare for the swift evolution of GenAI, as large language model (LLM) applications like ChatGPT and Gemini are only the start of its disruption. Simultaneously, these leaders are inundated with promises of productivity increases, skills gap reductions and other new benefits for cybersecurity. Gartner recommends using GenAI through proactive collaboration with business stakeholders to support the foundations for the ethical, safe and secure use of this disruptive technology. “It’s important to recognize that this is only the beginning of GenAI’s evolution, with many of the demos we’ve seen in security operations and application security showing real promise,” said ... Outcome-driven metrics (ODMs) are increasingly being adopted to enable stakeholders to draw a straight line between cybersecurity investment and the delivered protection levels it generates. According to Gartner, ODMs are central to creating a defensible cybersecurity investment strategy, reflecting agreed protection levels with powerful properties, and in simple language that is explainable to non-IT executives. 


Using AI to reduce false positives in secrets scanners

Secrets scanners were created to find leaks of such secrets before they reach malicious hands. They work by comparing the source code against predefined rules (regexes) that cover a wide range of secret types. Because they are rule-based, secrets scanners often trade between high false-positive rates on the one hand and low true-positive rates on the other. The inclination towards relaxed rules to capture more potential secrets results in frequent false positives, leading to alert fatigue among those tasked with addressing these alarms. Some scanners implement additional rule-based filters to decrease false alerts, like checking if the secret resides in a test file or whether it looks like a code variable, function call, CSS selection, etc., through semantic analysis. ... AI can play a role in overcoming this challenge. A large language model (LLM) can be directed at vast amounts of code and fine-tuned (trained) to understand the nuance of secrets and when they should be considered false positives. Given a secret and the context in which it was introduced, this model would then know whether it should be flagged. Using this approach will reduce the number of false positives while keeping true positive rates stable.
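A minimal sketch of that two-stage idea: a rule-based pass flags candidate secrets with regexes, and a second, context-aware pass discards likely false positives. The patterns, file name, and the heuristic filter standing in for a fine-tuned LLM are all illustrative assumptions.

```python
# Stage 1: regex rules flag candidates. Stage 2: a contextual filter prunes
# likely false positives before anyone is alerted.
import re

CANDIDATE_PATTERNS = [
    re.compile(r"AKIA[0-9A-Z]{16}"),  # AWS-style access key id
    re.compile(r"(?i)(api[_-]?key|secret)\s*[:=]\s*['\"][A-Za-z0-9/+=_-]{16,}['\"]"),
]

def scan(path: str):
    """Yield (line_number, line) pairs matching any candidate pattern."""
    with open(path, encoding="utf-8", errors="ignore") as f:
        for lineno, line in enumerate(f, start=1):
            if any(p.search(line) for p in CANDIDATE_PATTERNS):
                yield lineno, line.rstrip()

def looks_like_real_secret(line: str, path: str) -> bool:
    """Second-stage filter; cheap heuristics here, where the article's approach
    would ask a fine-tuned LLM to judge the candidate in context."""
    lowered = line.lower()
    return not ("test" in path or "example" in lowered or "placeholder" in lowered)

if __name__ == "__main__":
    target = "config.py"  # hypothetical file to scan
    for lineno, line in scan(target):
        if looks_like_real_secret(line, target):
            print(f"possible secret at {target}:{lineno}: {line}")
```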



Quote for the day:

''Leadership occurs any time you attempt to influence the thinking, development of beliefs of somebody else.'' -- Dr. Ken Blanchard

Daily Tech Digest - February 26, 2024

From deepfakes to digital candidates: AI’s political play

Deepfake technology uses AI to create or manipulate still images, video and audio content, making it possible to convincingly swap faces, synthesize speech, fabricate or alter actions in videos. This technology mixes and edits data from real images and videos to produce realistic-looking and -sounding creations that are increasingly difficult to distinguish from authentic content. While there are legitimate educational and entertainment uses for these technologies, they are increasingly being used for less sanguine purposes. Worries abound about the potential of AI-generated deepfakes that impersonate known figures to manipulate public opinion and potentially alter elections. ... Techniques like those used in deepfake technology produce highly realistic and interactive digital representations of fictional or real-life characters. These developments make it technologically possible to simulate conversations with historical figures or create realistic digital personas based on their public records, speeches and writings. One possible new application is that someone (or some group) will put forward an AI-created digital persona for public office.


How data governance must evolve to meet the generative AI challenge

“With generative AI bringing more data complexity, organizations must have good data governance and privacy policies in place to manage and secure the content used to train these models,” says Kris Lahiri, co-founder and chief security officer of Egnyte. “Organizations must pay extra attention to what data is used with these AI tools, whether third parties like OpenAI, PaLM, or an internal LLM that the company may use in-house.” Review genAI policies around privacy, data protection, and acceptable use. Many organizations require submitting requests and approvals from data owners before using data sets for genAI use cases. Consult with risk, compliance, and legal functions before using data sets that must meet GDPR, CCPA, PCI, HIPAA, or other data compliance standards. Data policies must also consider the data supply chain and responsibilities when working with third-party data sources. “Should a security incident occur involving data that is protected within a certain region, vendors need to be clear on both theirs and their customers’ responsibilities to properly mitigate it, especially if this data is meant to be used in AI/ML platforms” says Jozef de Vries, chief product engineering officer of EDB.


Will AI Replace Consultants? Here’s What Business Owners Say.

“Most consultants aren’t actually that smart," said Michael Greenberg of Modern Industrialists. “They’re just smarter than the average person.” But he reckons the average machine is much smarter. “Consultants generally do non-creative tasks based around systematic analysis, which is yet another thing machines are normally better at than humans.” Greenberg believes some consultants, “doing design or user experience, will survive,” but “the run of the mill accounting degree turned business advisor will not.” Someone who has “replaced all of [her] consultants with ChatGPT already, and experienced faster growth,” is Isabella Bedoya, founder of MarketingPros.ai. However, she thinks because “most people don't know how to use AI, savvy consultants need to leverage it to become even more powerful, effective and efficient for their clients” and stay ahead of their game. Heather Murray, director at Beesting Digital, thinks the inevitable replacement of consultants is down to quality. “There are so many poor quality consultants that rely rigidly on working their clients through set frameworks, regardless of the individual’s needs. AI could do that easily.” 


Effective Code Documentation for Data Science Projects

The first step to effective code documentation is ensuring it’s clear and concise. Remember, the goal here is to make your code understandable to others – and that doesn’t just mean other data scientists or developers. Non-technical stakeholders, project managers, and even clients may need to understand what your code does and why it works the way it does. To achieve this, you should aim to use plain language whenever possible. Avoid jargon and overly complex sentences. Instead, focus on explaining what each part of your code does, why you made the choices you did, and what the expected outcomes are. If there are any assumptions, dependencies, or prerequisites for your code, these should be clearly stated. Remember, brevity is just as important as clarity. ... Data science projects are often dynamic, with models and data evolving over time. This means that your code documentation needs to be equally dynamic. Keeping your documentation up to date is critical to ensuring its usefulness and accuracy. A good practice here is to treat your documentation as part of your code, updating it as you modify or add to your code base.
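As a small, hypothetical example of the kind of documentation described above, the docstring below states in plain language what the function does, the assumptions behind it, and the expected outcome, so both technical and non-technical readers can follow it:

```python
def moving_average(values: list[float], window: int = 3) -> list[float]:
    """Return the simple moving average of `values`.

    What it does: for each position i >= window - 1, averages the `window`
    values ending at i, so the result is shorter than the input by window - 1.

    Assumptions: `values` is a plain list of numbers and `window` is a positive
    integer no larger than len(values).

    Expected outcome: a list of floats, e.g. moving_average([1, 2, 3, 4], 2)
    returns [1.5, 2.5, 3.5].
    """
    return [sum(values[i - window + 1 : i + 1]) / window
            for i in range(window - 1, len(values))]
```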


Breaking down the language barrier: How to master the art of communication

Exactly how can cyber professionals go about improving their communication skills? According to Shapely, many people prefer to take short online learning courses. On-the-job coaching or mentorships are other popular upskilling strategies, providing quick and cost-effective practical learning opportunities. For those still early in their cybersecurity career, there is the option of building communication skills as part of a university degree. According to Kudrati, who teaches part-time at La Trobe University, many cybersecurity students must complete one subject on professional skills as part of their course. “This helps train students’ presentation skills, requiring them to present in front of lecturers and classmates as if they’re customers or business teams,” he says. Homing in on communication skills at university or early on in a cybersecurity professional’s career is also encouraged by Pearlson. In a study she conducted into the skills of cybersecurity professionals, she found that while communication skills were in demand, they were lacking, particularly among those in entry roles. 


4 core AI principles that fuel transformation success

Around 86% of software development companies are agile, and with good reason. Adopting an agile mindset and methodologies could give you an edge on your competitors, with companies that do seeing an average 60% growth in revenue and profit as a result. Our research has shown that agile companies are 43% more likely to succeed in their digital projects. One reason implementing agile makes such a difference is the ability to fail fast. The agile mindset allows teams to push through setbacks and see failures as opportunities to learn, rather than reasons to stop. Agile teams have a resilience that’s critical to success when trying to build and implement AI solutions to problems. Leaders who display this kind of perseverance are four times more likely to deliver their intended outcomes. Developing the determination to regroup and push ahead within leadership teams is considerably easier if they’re perceived as authentic in their commitment to embed AI into the company. Leaders can begin to eliminate roadblocks by listening to their teams and supporting them when issues or fears arise. That means proactively adapting when changes occur, whether this involves more delegation, bringing in external support, or reprioritizing resources.


Don’t Get Left Behind: How to Adopt Data-Driven Principles

Culture change remains the biggest hurdle to data-driven transformation. The disruption inherent in this evolution can put off some key stakeholders, but a few common-sense steps can guide your organization to tackle it successfully. Read the room - Executive buy-in is crucial to building a data-driven culture. Leadership must get behind the move so the rank-and-file will dedicate the time and effort needed to make the pivot. Map the landscape - You can’t change what you don’t know. Start by assessing the state of the organization: find the gaps in the existing data infrastructure and forecast any future analytics needs so you can plan for them. Evaluate your options - Building business intelligence (BI) and artificial intelligence (AI) systems from scratch is labor- and resource-intensive. ... However, there’s no need to reinvent the wheel; consider leveraging managed services to deal with scale and adaptation issues and ask for guidance from your provider’s data architects and scientists. Think single-source - Fragmentation detracts from the usefulness of data and can mask insights that would be available with better visibility. Implement integrated platforms that provide secure and scalable data pipelines, storage, and insights from end to end.


It’s time for security operations to ditch Excel

Microsoft Excel and Google Sheets are excellent for balancing books and managing cybersecurity budgets. However, they’re less ideal for tackling actual security issues, auditing, tracking, patching, and mapping asset inventories. Surely, our crown jewels deserve better. And yet, security operation teams are drowning in multi-tab tomes that require constant manual upkeep. Using these spreadsheets requires security operations to chase down every team in their organization for input on everything from the mapping of exceptions and end-of-life of machines to tracking hardware and operating systems. This is the only way to gather the information required on when, why and how certain security issues or tasks must be addressed. It’s no wonder, then, that the column reserved for due dates is usually mostly red. This is an industry-wide problem plaguing even multinational enterprises with top CISOs. Even those large enough to have GRC teams still use Excel for upcoming audits to verify remediations, delegate responsibilities and keep track of compliance certifications.


How Leadership Missteps Can Derail Your Cloud Strategy

Cloud computing involves many moving parts working in unison; therefore, leadership must be clear and concise regarding their cloud strategies. Yet often they are not. The problems arise from not acknowledging the complexity inherent in moving to the cloud. It's not a simple plug-and-play transition, but one that requires modifications not only to technology but also to business processes and organizational culture. For these reasons, the scope of the project is easily underestimated. Underestimating the complexity of transitioning to cloud computing can lead to significant pitfalls. Inadequate staff training, lax security measures, and rushed vendor choices together are just the tip of the iceberg. These oversights, seemingly minor at first, can snowball into significant issues down the line. But there's another layer: the iceberg beneath the surface. Focusing merely on the initial outlay while overlooking ongoing operational costs is like ignoring the currents below, both can unexpectedly steer your budget -- and your company -- off course. Acknowledging and managing operational expenses is vital for a thorough and financially stable cloud computing strategy.


The Art of Ethical Hacking: Securing Systems in the Digital Age

Stressing the obvious differences between malicious hacking and ethical hacking is vital. Even though the strategies utilized may be comparable, ethical hacking is carried out with permission and aims to strengthen security. On the other hand, malicious hacking entails unlawful access to steal, disrupt, or manipulate data without authorization. Operating within moral and legal bounds, ethical hackers make sure that their acts advance cybersecurity measures as a whole. Ethical hacking is the term used to describe a legitimate attempt to obtain unauthorized access to a computer system, program, or information. Ethical hacking includes imitating the methods and actions of malicious attackers. By using this method, security vulnerabilities can be found and fixed before a malicious attack can make use of them. ... As individuals and organizations continue to depend on technology for everyday tasks and business operations, the role of ethical hacking in strengthening cybersecurity will only become more crucial. Embracing ethical hacking as a proactive strategy can be the difference between a digital environment that is secure and one that is susceptible to potentially catastrophic cyberattacks.



Quote for the day:

"Things work out best for those who make the best of how things work out." -- John Wooden

Daily Tech Digest - February 25, 2024

Orgs Face Major SEC Penalties for Failing to Disclose Breaches

"It's a company issue, definitely not just CISO issue. Everybody will be very leery about vetting statements — why should I say this? — without having legal give it their blessing ... because they are so worried about having charges against them for making a statement." The worries will add up to additional costs for businesses. Because of the additional liability, companies will have to have more comprehensive Directors and Officers (D&O) liability insurance that not only covers the legal expenses for a CISO to defend themselves, but also for their expenses during an investigation. Businesses who will not pay to support and protect their CISO may find themselves unable to hire for the position, while conversely, CISOs may have trouble finding supportive companies, says Josh Salmanson, senior vice president of technology solutions at Telos Corp., a cyber risk management firm. "We're going to see less people wanting to be CISOs, or people demanding much higher salaries because they think it may be a very short-term role until they 'get busted' publicly," he says. "The number of people that will have a really ideal environment with support from the company and the funding that they need will likely remain small."


Risk Management Strategies for Tech Startups

As you continue to grow, your risk management strategies will shift. One of the best things you can do as your startup gains traction is to develop a contingency plan. A contingency plan can keep things afloat if you run into an unexpected loss of customers, funding problems, or even a data disaster. Your contingency plan should include, first and foremost, strong cybersecurity practices. Cyberattacks happen with even the largest and most successful conglomerates. While you might not be able to completely stop cyber criminals from getting in, prioritizing protective measures and developing a response plan will make it easier for your business to bounce back if an attack happens. Things like using cloud-based backups, developing strong passwords and authentication practices, and educating your employees on how to keep themselves safe are all great ways to protect your business from hackers. A successful contingency plan should also cover unexpected accidents and incidents. If someone gets injured on the job or your company gets sued, a strong insurance plan needs to be in place to cover legal fees and damages. 


The Architect’s Contract

The architect is a business technology strategist. They provide their clients with ways to augment business with technology strategy in both localized and universal scales. They make decisions which augment the value output of a business model (or a mission model) by describing technology solutions which can fundamentally alter the business model. Some architects specialize in one or more areas of that. But the general data indicated that even pure business architects are called on to rely on their technical skills quite often, and the most technical software architects must have numerous business skills to be successful. ... Governance is not why architects get into the job. The ones that do are generally architect managers, not competent architects themselves. All competent architects started out by making things. Proactive, innovation based teams create new architects constantly. Moving up to too high a level of scope makes it very hard to stay a practicing architect. It takes radical dedication to learning to be a real chief architect. Scope is one of the biggest challenges of our field as it is based on the concept of scarcity. Like having city planners ‘design’ homes or skyscrapers or cathedrals.


Why DevOps is Key to Software Supply Chain Security

Organizations must also evaluate how well existing processes work to protect the business, then strategically add/subtract from there as needed. No matter what solutions are leveraged, more and different tools generate reams of more and different data. What’s important — and to whom? How do I manage the data? When can I trust it? Where do I store it? What problems does the new data help me solve? Organizations will need a way to effectively sift this information and deliver the right data to the right teams at the right time. To preserve the ability to quickly and continuously innovate, it will be important to focus on shifting security left as well as integrating automation whenever and wherever possible. As new security metadata becomes available, such as from SBOMs, new solutions for managing that metadata will be key. An open source initiative sponsored by Google, GUAC is designed to integrate software security information, including SBOMs, attestations and vulnerability data. Users can query the resulting GUAC graph to help answer key security concerns, including proactive, preventive and reactive concerns.


The Future of Computing: Harnessing Molecules for Sustainable Data Management

Molecular computing harnesses the natural propensity of molecules to form complex, stable structures, allowing for parallel processing – an important advantage that enables computational tasks to be performed simultaneously, a feat that current supercomputers can only dream of. Enzymes like polymerases can simultaneously replicate millions of DNA strands, each acting as a separate computing pathway. This capability translates to potential parallel processing operations in the order of 10^15, dwarfing the 10^10 operations per second of the fastest supercomputers. Energy efficiency is another game-changer. The energy profile of molecular computing is notably low. DNA replication in a test tube requires minimal energy, estimated at less than a millionth of a joule per operation, compared to the approximately 10^-4 joules consumed by a typical transistor operation. This translates to a potential reduction in energy consumption by a factor of 10^5 or more, depending on the operation. To prove our point, training models like GPT-4 require tens of millions of kilowatt-hours; molecular computing could achieve similar results in a fraction of the time and with exponentially less energy.


Role of AI in Data Management Evolution – Interview with Rakesh Singh

Embracing AI-based solutions presents a challenge to organizations centered around governance and maintaining a firm grip on the overall processes. This challenge is particularly present in the financial sector, where maintaining control is not only a preference but a crucial necessity. Therefore, in tandem with the adoption of AI-driven solutions, a concerted emphasis must be placed on ensuring robust governance measures. For financial institutions, the imperative extends beyond the mere integration of AI; it encompasses a holistic commitment to upholding data security, enforcing comprehensive policies, safeguarding privacy, and adhering to stringent compliance standards. Recognizing that the implementation of AI introduces complexities and potential vulnerabilities, it becomes imperative to establish a framework that not only facilitates the effective utilization of AI but also fortifies the organization against risks. In essence, the successful adoption of AI in the financial domain necessitates a dual focus – one on leveraging the transformative potential of AI solutions and the other on erecting a resilient governance structure.


Ransomware Operation LockBit Reestablishes Dark Web Leak Site

Law enforcement agencies behind the takedown, acting under the banner of "Operation Cronos," suggested they would reveal on Friday the identity of LockBit leader LockBitSupp - but did not. "We know who he is. We know where he lives. We know how much he is worth. LockBitSupp has engaged with Law Enforcement :)," authorities instead wrote on the seized leak site. "LockBit has been seriously damaged by this takedown and his air of invincibility has been permanently pierced. Every move he has taken since the takedown is one of someone posturing, not of someone actually in control of the situation," said Allan Liska, principal intelligence analyst, Recorded Future. The re-established leak site includes victim entries apparently made just before Operation Cronos executed the takedown, including one for Fulton County, Ga. LockBit previously claimed responsibility for a January attack that disrupted the county court and tax systems. County District Attorney Fani Willis is pursuing a case against former President Donald Trump and 18 co-defendants for allegedly attempting to stop the transition of presidential power in 2020.


Toward Better Patching — A New Approach with a Dose of AI

By default, the NIST-operated National Vulnerability Database (NVD) is the source of truth for CVSS scores. But NVD gets its entries from the CVE database, and if there is no completed CVE entry, there is no NVD entry — and therefore no immediately trusted and verifiable CVSS score. Despite this, security teams use whatever CVSS score they are given as a primary factor in their vulnerability patch triaging — the higher the score, the greater the perceived likelihood of exploitation and the greater the potential for harm – and that score is likely to have been applied by the vulnerability researcher. There is inevitable delay and confusion (due to ‘responsible disclosure’, possible delays in posting to the CVE database, and an element of subjectivity in the CVSS score). “The delay in CVE scoring often means that defenders face two uphill battles regarding vulnerability management. First, they need a prioritization method to determine which of the thousands of CVEs published each month they should patch,” notes Coalition. “Second, they must patch these CVEs before a threat actor leverages them to target their organization.”
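One way to picture such a prioritization method is a simple triage ranking that weighs exploitation signals ahead of raw severity. The sketch below is a minimal, illustrative example using made-up data; real programs typically pull in sources such as a known-exploited-vulnerabilities list or EPSS-style probabilities.

    # Minimal, illustrative CVE triage sketch: rank findings by exploitation
    # signals first, then by CVSS score. All data below is invented.
    from dataclasses import dataclass

    @dataclass
    class Finding:
        cve_id: str
        cvss: float                 # score as reported, possibly pre-NVD
        known_exploited: bool       # e.g., appears on a known-exploited list
        exploit_probability: float  # e.g., an EPSS-style estimate, 0..1

    def triage(findings: list[Finding]) -> list[Finding]:
        """Sort so actively exploited, high-likelihood, high-severity issues
        land at the top of the patch queue."""
        return sorted(
            findings,
            key=lambda f: (f.known_exploited, f.exploit_probability, f.cvss),
            reverse=True,
        )

    backlog = [
        Finding("CVE-2024-0001", cvss=9.8, known_exploited=False, exploit_probability=0.02),
        Finding("CVE-2024-0002", cvss=7.5, known_exploited=True,  exploit_probability=0.60),
        Finding("CVE-2024-0003", cvss=8.1, known_exploited=False, exploit_probability=0.35),
    ]

    for f in triage(backlog):
        print(f.cve_id, f.cvss, f.known_exploited, f.exploit_probability)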


Apple Beefs Up iMessage With Quantum-Resistant Encryption

"To our knowledge, PQ3 has the strongest security properties of any at-scale messaging protocol in the world," Apple's SEAR team explained in a blog post announcing the new protocol. The addition of PQ3 follows iMessage's October 2023 enhancement featuring Contact Key Verification, designed to detect sophisticated attacks against Apple's iMessage servers while letting users verify they are messaging specifically with their intended recipients. IMessage with PQ3 is backed by mathematical validation from a team led by professor David Basin, head of the Information Security Group at ETH Zürich and co-inventor of Tamarin, a well-regarded security protocol verification tool. Basin and his research team at ETH Zürich used Tamarin to perform a technical evaluation of PQ3, published by Apple. Also evaluating PQ3 was University of Waterloo professor Douglas Stebila, known for his research on post-quantum security for Internet protocols. According to Apple's SEAR team, both research groups undertook divergent but complementary approaches, running different mathematical models to test the security of PQ3.


Is "Secure by Design" Failing?

The threat landscape around new Common Vulnerabilities and Exposures (CVEs) is one that every organization should take seriously. With a record-breaking 28,092 new CVEs published in 2023, bad actors are simply waiting to be handed easy footholds into their target organizations, and they don't have to wait long. Research from Qualys showed that three-quarters of CVEs are exploited by attackers within just 19 days of their publication. And yet, organizations are failing to equip their DevOps teams with the secure coding skills and knowledge they need to eliminate vulnerabilities in the first place. Despite 47% of organizations blaming skills shortages for their vulnerability remediation failures, only 36% have their developers learn to write secure code. ... Firstly, developers need to understand the role they play in securing overall application development. This begins with writing more secure code, but this knowledge is also essential in code reviews. As developers write faster, or even leverage generative AI and open-source code to deliver applications more quickly, being able to properly review and remediate insecure code becomes crucial.
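As a small, generic example of the kind of remediation a code review should catch (my own illustration, not drawn from the article), the sketch below contrasts a SQL query built by string concatenation with a parameterized version.

    # Illustrative secure-coding review example: string-built SQL is
    # injectable; a parameterized query treats input strictly as data.
    import sqlite3

    def find_user_unsafe(conn: sqlite3.Connection, username: str):
        # Vulnerable: attacker-controlled input is spliced into the SQL text.
        query = f"SELECT id, name FROM users WHERE name = '{username}'"
        return conn.execute(query).fetchall()

    def find_user_safe(conn: sqlite3.Connection, username: str):
        # Remediated: the driver binds the value as a parameter.
        query = "SELECT id, name FROM users WHERE name = ?"
        return conn.execute(query, (username,)).fetchall()

    if __name__ == "__main__":
        conn = sqlite3.connect(":memory:")
        conn.execute("CREATE TABLE users (id INTEGER PRIMARY KEY, name TEXT)")
        conn.execute("INSERT INTO users (name) VALUES ('alice')")
        print(find_user_safe(conn, "alice"))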



Quote for the day:

"Great achievers are driven, not so much by the pursuit of success, but by the fear of failure." -- Larry Ellison

Daily Tech Digest - February 24, 2024

Business Continuity vs Disaster Recovery: 10 Key Differences

A key part of the BCP is identifying Recovery Strategies. These strategies outline how the business will continue critical operations after an incident. These strategies might involve alternative methods or locations for conducting business. The BCP also outlines the Incident Management Plan. It sets the roles, duties, and steps for managing an incident. This includes plans to talk to stakeholders and emergency services. The Development of Recovery Plans for key business areas such as IT systems, data, and customer service is also integral. These plans provide specific instructions for returning to normal operations after the disruption. ... A disaster recovery plan is intended to reduce data loss and downtime while facilitating the quick restoration of vital business operations following an unfavorable incident. The plan comprises actions to lessen the impact of a calamity so that the company may swiftly resume mission-critical operations or carry on with business as usual. A DRP typically includes an investigation of the demands for continuity and business processes. An organization often conducts a risk analysis (RA) and business impact analysis (BIA) to set recovery targets before creating a comprehensive strategy.
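As a loose illustration of how the recovery targets coming out of a risk analysis and business impact analysis might be recorded before the full strategy is drafted, the sketch below captures per-function recovery time and recovery point objectives; the functions and figures are invented.

    # Illustrative only: recording recovery targets (RTO/RPO) per business
    # function after an RA/BIA. The functions and figures are invented.
    from dataclasses import dataclass

    @dataclass
    class RecoveryTarget:
        function: str
        rto_hours: float   # Recovery Time Objective: max tolerable downtime
        rpo_hours: float   # Recovery Point Objective: max tolerable data loss

    TARGETS = [
        RecoveryTarget("customer payments", rto_hours=2,  rpo_hours=0.25),
        RecoveryTarget("internal email",    rto_hours=24, rpo_hours=4),
        RecoveryTarget("data warehouse",    rto_hours=48, rpo_hours=24),
    ]

    # Most urgent functions first, so the DR plan addresses them first.
    for t in sorted(TARGETS, key=lambda t: t.rto_hours):
        print(f"{t.function}: restore within {t.rto_hours}h, "
              f"lose at most {t.rpo_hours}h of data")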


Test Outlines: A Novel Approach to Software Testing

Test Outlines reimagine the traditional test case, combining its step-by-step precision with the cohesiveness and context of test scenarios. This blend of methodologies lays the foundation for a testing approach that goes further than either of its predecessors. Rather than treating the steps of a test case as isolated instructions, a Test Outline draws them into a coherent storyline of a user's journey through the software. That narrative lens not only simplifies the overall testing documentation but also reflects how end users will actually interact with the software in real settings, moving testing from a simple step checklist toward a dynamic heuristic centered on the user experience. A narrative approach also shifts attention from isolated functionality to the interrelationships between features, making it easier to identify critical dependencies, potential integration issues, and overall system behavior as users interact with the software.
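To make the contrast concrete, the sketch below expresses a single user journey as one narrative test rather than a checklist of isolated step checks. The tiny in-memory Shop class is invented purely so the example runs; a real Test Outline would drive the actual application under test.

    # A narrative, user-journey style test (runnable with pytest or directly).
    # The in-memory Shop class is a stand-in invented for this sketch.
    class Shop:
        def __init__(self):
            self.catalog = {"book": 12.0, "pen": 2.5}
            self.cart = {}

        def add_to_cart(self, item: str, qty: int = 1):
            if item not in self.catalog:
                raise KeyError(item)
            self.cart[item] = self.cart.get(item, 0) + qty

        def checkout(self) -> float:
            return sum(self.catalog[i] * q for i, q in self.cart.items())

    def test_shopper_browses_adds_items_and_checks_out():
        # Outline: a shopper browses the catalog, adds two items, then pays.
        shop = Shop()
        assert "book" in shop.catalog          # browse the catalog
        shop.add_to_cart("book")               # add a first item
        shop.add_to_cart("pen", qty=2)         # add a related second item
        assert shop.checkout() == 17.0         # checkout reflects the journey

    if __name__ == "__main__":
        test_shopper_browses_adds_items_and_checks_out()
        print("user journey passed")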


Alarm Over GenAI Risk Fuels Security Spending in Middle East & Africa

Concerns over the business impact of generative AI are certainly not limited to the Middle East and Africa. Microsoft and OpenAI warned last week that the two companies had detected nation-state attackers from China, Iran, North Korea, and Russia using the companies' GenAI services to improve attacks by automating reconnaissance, answering queries about targeted systems, and improving the messages and lures used in social engineering attacks, among other tactics. And in the workplace, three-quarters of cybersecurity and IT professionals believe that GenAI is being used by workers, with or without authorization. The obvious security risks are not dampening enthusiasm for GenAI and LLMs. Nearly a third of organizations worldwide already have a pilot program in place to explore the use of GenAI in their business, with 22% already using the tools and 17% implementing them. "With a bit of upfront technical effort, this risk can be minimized by thinking through specific use cases for enabling access to generative AI applications while looking at the risk based on where data flows," Teresa Tung, cloud-first chief technologist at Accenture, stated in a 2023 analysis of the top generative AI threats.


What’s the difference between a software engineer and software developer?

One way to think of the main difference between software engineers and developers is the scope of their work. Software engineers tend to focus more on the larger picture of a project—working more closely with the infrastructure, security, and quality. Software developers, on the other hand, are more laser-focused on a specific coding task. In other words, software developers focus on ensuring software functionality whereas engineers ensure the software aligns with customer requirements, says Rostami. “One way to think about it: If you double your software developer team, you’ll double your code. But if you double your software engineering team, you’ll double the customer impact,” she tells Fortune. But it is also important to note that because the titles are so often used interchangeably, the exact differences between software engineer and software developer roles may vary slightly from company to company. Engineers may also have a greater grasp of broader computer system ecosystems, as well as stronger soft skills. ... When it comes to total pay, engineers bring home nearly $30,000 more on average, which could, in part, be due to project completion bonuses or other circumstances.


Simplified Data Management and Analytics Strategies for AI Environments

Leveraging automation tools such as Apache Airflow or Microsoft Power Automate offers significant advantages in streamlining and optimizing the entire data management lifecycle. These tools can play a crucial role in automating not only data collection, storage, and analysis but also in orchestrating complex workflows and data pipelines, thereby reducing manual intervention and accelerating data processing. For instance, these automation tools can be harnessed to schedule and automate the extraction of data from diverse sources, such as databases, APIs, and cloud services. By automating these processes, organizations can ensure timely and efficient data collection without the need for manual intervention, reducing the risk of human errors and enhancing the overall reliability of the data. Moreover, once the data is extracted, these automation tools can seamlessly transform the data into standardized formats, ensuring consistency and compatibility across different data sources. This standardized process not only simplifies the integration of heterogeneous data but also paves the way for efficient data analysis and reporting.
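As a minimal sketch of what such orchestration can look like in Apache Airflow (assuming a recent Airflow 2.x release; the task bodies and schedule are placeholders of my own), a daily extract-then-transform pipeline might be declared like this:

    # Minimal Airflow 2.x sketch of a scheduled extract -> transform pipeline.
    # Task bodies are placeholders; real tasks would call databases, APIs, etc.
    from datetime import datetime

    from airflow import DAG
    from airflow.operators.python import PythonOperator

    def extract():
        # Placeholder: pull raw records from a source system.
        return [{"id": 1, "amount": "10.50"}]

    def transform():
        # Placeholder: normalize records into a standardized format.
        pass

    with DAG(
        dag_id="daily_data_pipeline",
        start_date=datetime(2024, 1, 1),
        schedule="@daily",
        catchup=False,
    ) as dag:
        extract_task = PythonOperator(task_id="extract", python_callable=extract)
        transform_task = PythonOperator(task_id="transform", python_callable=transform)

        extract_task >> transform_task  # transform runs only after extract succeeds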


Low-code doesn’t mean low quality

Granted, no-code platforms make it easy to get the stack up and running to support back-office workflows, but what about supporting those outside the workflow? Does low-code offer the functionality and flexibility to support applications that fall outside the box? The truth is that low-code programming architectures are gaining popularity precisely because of their versatility. Rather than compromising on quality programming, low-code frees developers to make applications more creative and more productive. ... Modern low-code platforms include customization, configuration, and extensibility options. Every drag-and-drop widget is pretested to deliver flawless functionality and make it easier to build applications faster. However, those widgets also have multiple options to handle business logic in different ways at various events. Low-code widgets allow developers to focus on integration and functional testing rather than component testing. ... The productivity gains low-code gives developers come primarily from the ability to reuse abstractions at the component or module level; the ability to reuse code reduces the time needed to develop customized solutions. 


ConnectWise ScreenConnect attacks deliver malware

The vulnerabilities involve authentication bypass and path traversal issues within the server software itself, not the client software that is installed on the end-user devices. Attackers have found that they can deploy malware to servers or to workstations with the client software installed. Sophos has evidence that attacks against both servers and client machines are currently underway. Patching the server will not remove any malware or webshells attackers manage to deploy prior to patching, and any compromised environments need to be investigated. Cloud-hosted implementations of ScreenConnect, including screenconnect.com and hostedrmm.com, received mitigations within hours of validation to address these vulnerabilities. Self-hosted (on-premise) instances remain at risk until they are manually upgraded, and it is our recommendation to patch to ScreenConnect version 23.9.8 immediately. ... If you are no longer under maintenance, ConnectWise is allowing you to install version 22.4 at no additional cost, which will fix CVE-2024-1709, the critical vulnerability. However, this should be treated as an interim step.


Microservices Modernization Missteps: Four Anti-Patterns of Rebuilding Apps

A common misstep when rearchitecting legacy services into microservices is to make a functional, one-to-one replica of the legacy services. You simply look at what the existing services do, and you make sure the new bundle of microservices does that. The problem here is that your business has likely evolved its operations since the legacy services were made. That means that you likely don't need all the same functionality in the legacy services. And if you do need that functionality, you might need to do it differently, which is exactly the reason you are modernizing in the first place: The legacy services are no longer helping the business function as desired. Often, organizations will treat modernization as purely technical work and exclude business stakeholders from the process. This means developers won't have enough input from business stakeholders when picking which parts of the legacy services to replicate, which to drop, and which to improve. In this situation, developers will just replicate the legacy services. When business stakeholders and users are not involved in microservice identification, you risk misalignment on new requirements and introducing new potential problems or rework in the future.


Entering the Age of Explainable AI

Having access to good, clean data is always a crucial first step for businesses thinking about AI transformation because it ensures the accuracy of the predictions made by AI models. If the data being fed into the models is flawed or contains errors, the output will also be unreliable and subject to bias. Investing in a self-service data analytics platform that includes sophisticated data cleansing and prep tools, along with data governance, provides business users with the trust and confidence they need to move forward with their AI initiatives. These tools also help with accountability and -- consequently -- data quality. When a code-based model is created, it can take time to track who made changes and why, leading to problems later when someone else needs to take over the project or when there is a bug in the code. ... Equally important to the technology is ensuring that data analytics methodologies are both accessible and scalable, which can be accomplished through training. Data scientists are hard to come by, and you need people who understand the business problems, whether or not they can code. No-code/low-code data analytics platforms make it possible for people with limited programming experience to build and deploy data science models.


End-To-End Test Automation for Boosting Software Effectiveness

To check the entire application flow, QA automation engineers must implement robust automated scripts based on test cases that follow real-life user scenarios. It’s vital to make sure the scripts are maintainable and can be easily understood by every team member. It’s also important to pay special attention to tests that verify the UI to prevent flakiness, i.e., tests that sometimes pass and sometimes fail when run under the same conditions and without any code changes. This may happen because of the complicated nature of the tests or outer conditions, such as problems with the network. ... To expedite software testing activities and obtain valuable feedback faster, it's good practice to run several automated scripts at the same time on diverse equipment or environments. While doing so, companies can either use cloud infrastructure, such as virtual machines, or use on-premises infrastructure, depending on the client’s technical ecosystem. In addition, in the case of the former option, QA automation engineers can ramp up cloud infrastructure to support important releases, which allows more tests to run at the same time and avoids long-term investment in local infrastructure.
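A rough sketch of how the same check can be parametrized across several target environments, in a form that a plugin such as pytest-xdist can then spread across parallel workers (for example, `pytest -n 4`); the environment names and URLs are invented:

    # Illustrative sketch: one test parametrized over several environments.
    # With the pytest-xdist plugin installed, `pytest -n 4` spreads the
    # resulting test cases across four parallel workers.
    import pytest

    ENVIRONMENTS = {
        "staging": "https://staging.example.com",  # invented environments
        "uat": "https://uat.example.com",
    }

    @pytest.fixture(params=sorted(ENVIRONMENTS))
    def base_url(request):
        return ENVIRONMENTS[request.param]

    def test_base_url_is_https(base_url):
        # A real end-to-end test would drive the UI or API at base_url;
        # this placeholder only checks the URL so the sketch stays
        # self-contained and runnable without network access.
        assert base_url.startswith("https://")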



Quote for the day:

"Effective Leaders know that resources are never the problem; it's always a matter of resourcefulness." -- Tony Robbins