Daily Tech Digest - February 26, 2024

From deepfakes to digital candidates: AI’s political play

Deepfake technology uses AI to create or manipulate still images, video and audio content, making it possible to convincingly swap faces, synthesize speech, and fabricate or alter actions in videos. This technology mixes and edits data from real images and videos to produce realistic-looking and -sounding creations that are increasingly difficult to distinguish from authentic content. While there are legitimate educational and entertainment uses for these technologies, they are increasingly being used for less benign purposes. Worries abound about the potential of AI-generated deepfakes that impersonate known figures to manipulate public opinion and potentially alter elections. ... Techniques like those used in deepfake technology produce highly realistic and interactive digital representations of fictional or real-life characters. These developments make it technologically possible to simulate conversations with historical figures or create realistic digital personas based on their public records, speeches and writings. One possible new application is that someone (or some group) will put forward an AI-created digital persona for public office.


How data governance must evolve to meet the generative AI challenge

“With generative AI bringing more data complexity, organizations must have good data governance and privacy policies in place to manage and secure the content used to train these models,” says Kris Lahiri, co-founder and chief security officer of Egnyte. “Organizations must pay extra attention to what data is used with these AI tools, whether with third-party models like OpenAI or PaLM, or an internal LLM that the company may use in-house.” Review genAI policies around privacy, data protection, and acceptable use. Many organizations require submitting requests to, and obtaining approvals from, data owners before using data sets for genAI use cases. Consult with risk, compliance, and legal functions before using data sets that must meet GDPR, CCPA, PCI, HIPAA, or other data compliance standards. Data policies must also consider the data supply chain and responsibilities when working with third-party data sources. “Should a security incident occur involving data that is protected within a certain region, vendors need to be clear on both theirs and their customers’ responsibilities to properly mitigate it, especially if this data is meant to be used in AI/ML platforms,” says Jozef de Vries, chief product engineering officer of EDB.


Will AI Replace Consultants? Here’s What Business Owners Say.

“Most consultants aren’t actually that smart,” said Michael Greenberg of Modern Industrialists. “They’re just smarter than the average person.” But he reckons the average machine is much smarter. “Consultants generally do non-creative tasks based around systematic analysis, which is yet another thing machines are normally better at than humans.” Greenberg believes some consultants, “doing design or user experience, will survive,” but “the run of the mill accounting degree turned business advisor will not.” Someone who has “replaced all of [her] consultants with ChatGPT already, and experienced faster growth,” is Isabella Bedoya, founder of MarketingPros.ai. However, she thinks because “most people don't know how to use AI, savvy consultants need to leverage it to become even more powerful, effective and efficient for their clients” and stay ahead of the game. Heather Murray, director at Beesting Digital, thinks the inevitable replacement of consultants comes down to quality. “There are so many poor quality consultants that rely rigidly on working their clients through set frameworks, regardless of the individual’s needs. AI could do that easily.”


Effective Code Documentation for Data Science Projects

The first step to effective code documentation is ensuring it’s clear and concise. Remember, the goal here is to make your code understandable to others – and that doesn’t just mean other data scientists or developers. Non-technical stakeholders, project managers, and even clients may need to understand what your code does and why it works the way it does. To achieve this, you should aim to use plain language whenever possible. Avoid jargon and overly complex sentences. Instead, focus on explaining what each part of your code does, why you made the choices you did, and what the expected outcomes are. If there are any assumptions, dependencies, or prerequisites for your code, these should be clearly stated. Remember, brevity is just as important as clarity. ... Data science projects are often dynamic, with models and data evolving over time. This means that your code documentation needs to be equally dynamic. Keeping your documentation up to date is critical to ensuring its usefulness and accuracy. A good practice here is to treat your documentation as part of your code, updating it as you modify or add to your code base.
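To make the guidance above concrete, here is a minimal sketch of what such documentation might look like in practice. The function, column names, and thresholds are hypothetical; the point is the docstring, which states in plain language what the code does, why the choice was made, and which assumptions and prerequisites apply.

```python
import pandas as pd

def fill_missing_ages(df: pd.DataFrame, strategy: str = "median") -> pd.DataFrame:
    """Fill missing values in the 'age' column of a customer DataFrame.

    Why: downstream churn models cannot handle missing ages, and dropping
    those rows would discard a meaningful share of the data.

    Assumptions / prerequisites:
        - `df` contains a numeric 'age' column.
        - Ages are in years; values outside 0-120 are treated as missing.

    Args:
        df: Raw customer data.
        strategy: "median" (default) or "mean" imputation.

    Returns:
        A copy of `df` with 'age' imputed; the original is not modified.
    """
    out = df.copy()
    # Treat implausible ages as missing before imputing.
    out.loc[~out["age"].between(0, 120), "age"] = None
    fill_value = out["age"].median() if strategy == "median" else out["age"].mean()
    out["age"] = out["age"].fillna(fill_value)
    return out
```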


Breaking down the language barrier: How to master the art of communication

Exactly how can cyber professionals go about improving their communication skills? According to Shapely, many people prefer to take short online learning courses. On-the-job coaching or mentorships are other popular upskilling strategies, providing quick and cost-effective practical learning opportunities. For those still early in their cybersecurity career, there is the option of building communication skills as part of a university degree. According to Kudrati, who teaches part-time at La Trobe University, many cybersecurity students must complete one subject on professional skills as part of their course. “This helps train students’ presentation skills, requiring them to present in front of lecturers and classmates as if they’re customers or business teams,” he says. Homing in on communication skills at university or early on in a cybersecurity professional’s career is also encouraged by Pearlson. In a study she conducted into the skills of cybersecurity professionals, she found that while communication skills were in demand, they were lacking, particularly among those in entry roles. 


4 core AI principles that fuel transformation success

Around 86% of software development companies are agile, and with good reason. Adopting an agile mindset and methodologies could give you an edge on your competitors, with companies that do seeing an average 60% growth in revenue and profit as a result. Our research has shown that agile companies are 43% more likely to succeed in their digital projects. One reason implementing agile makes such a difference is the ability to fail fast. The agile mindset allows teams to push through setbacks and see failures as opportunities to learn, rather than reasons to stop. Agile teams have a resilience that’s critical to success when trying to build and implement AI solutions to problems. Leaders who display this kind of perseverance are four times more likely to deliver their intended outcomes. Developing the determination to regroup and push ahead within leadership teams is considerably easier if they’re perceived as authentic in their commitment to embed AI into the company. Leaders can begin to eliminate roadblocks by listening to their teams and supporting them when issues or fears arise. That means proactively adapting when changes occur, whether this involves more delegation, bringing in external support, or reprioritizing resources.


Don’t Get Left Behind: How to Adopt Data-Driven Principles

Culture change remains the biggest hurdle to data-driven transformation. The disruption inherent in this evolution can put off some key stakeholders, but a few common-sense steps can guide your organization to tackle it successfully. Read the room - Executive buy-in is crucial to building a data-driven culture. Leadership must get behind the move so the rank-and-file will dedicate the time and effort needed to make the pivot. Map the landscape - You can’t change what you don’t know. Start by assessing the state of the organization: find the gaps in the existing data infrastructure and forecast any future analytics needs so you can plan for them. Evaluate your options - Building business intelligence (BI) and artificial intelligence (AI) systems from scratch is labor- and resource-intensive. ... However, there’s no need to reinvent the wheel; consider leveraging managed services to deal with scale and adaptation issues and ask for guidance from your provider’s data architects and scientists. Think single-source - Fragmentation detracts from the usefulness of data and can mask insights that would be available with better visibility. Implement integrated platforms that provide secure and scalable data pipelines, storage, and insights from end to end.


It’s time for security operations to ditch Excel

Microsoft Excel and Google Sheets are excellent for balancing books and managing cybersecurity budgets. However, they’re less ideal for tackling actual security issues, auditing, tracking, patching, and mapping asset inventories. Surely, our crown jewels deserve better. And yet, security operation teams are drowning in multi-tab tomes that require constant manual upkeep. Using these spreadsheets requires security operations to chase down every team in their organization for input on everything from the mapping of exceptions and end-of-life of machines to tracking hardware and operating systems. This is the only way to gather the information required on when, why and how certain security issues or tasks must be addressed. It’s no wonder, then, that the column reserved for due dates is usually mostly red. This is an industry-wide problem plaguing even multinational enterprises with top CISOs. Even those large enough to have GRC teams still use Excel for upcoming audits to verify remediations, delegate responsibilities and keep track of compliance certifications.


How Leadership Missteps Can Derail Your Cloud Strategy

Cloud computing involves many moving parts working in unison; therefore, leadership must be clear and concise regarding their cloud strategies. Yet often they are not. The problems arise from not acknowledging the complexity inherent in moving to the cloud. It's not a simple plug-and-play transition, but one that requires modifications not only to technology but also to business processes and organizational culture. For these reasons, the scope of the project is easily underestimated. Underestimating the complexity of transitioning to cloud computing can lead to significant pitfalls. Inadequate staff training, lax security measures, and rushed vendor choices together are just the tip of the iceberg. These oversights, seemingly minor at first, can snowball into significant issues down the line. But there's another layer: the iceberg beneath the surface. Focusing merely on the initial outlay while overlooking ongoing operational costs is like ignoring the currents below; both can unexpectedly steer your budget -- and your company -- off course. Acknowledging and managing operational expenses is vital for a thorough and financially stable cloud computing strategy.


The Art of Ethical Hacking: Securing Systems in the Digital Age

Stressing the obvious differences between malicious hacking and ethical hacking is vital. Even though the techniques used may be comparable, ethical hacking is carried out with permission and aims to strengthen security. Malicious hacking, on the other hand, entails unauthorized access to steal, disrupt, or manipulate data. Operating within moral and legal bounds, ethical hackers make sure that their actions advance cybersecurity measures as a whole. Ethical hacking is the term used to describe a legitimate attempt to gain what would otherwise be unauthorized access to a computer system, application, or data. Ethical hacking involves imitating the methods and actions of malicious attackers. By using this approach, security vulnerabilities can be found and fixed before a malicious attack can exploit them. ... As individuals and organizations continue to depend on technology for everyday tasks and business operations, the role of ethical hacking in strengthening cybersecurity will only become more crucial. Embracing ethical hacking as a proactive strategy can be the difference between a safe digital environment and one that is susceptible to potentially catastrophic cyberattacks.



Quote for the day:

"Things work out best for those who make the best of how things work out." -- John Wooden

Daily Tech Digest - February 25, 2024

Orgs Face Major SEC Penalties for Failing to Disclose Breaches

"It's a company issue, definitely not just CISO issue. Everybody will be very leery about vetting statements — why should I say this? — without having legal give it their blessing ... because they are so worried about having charges against them for making a statement." The worries will add up to additional costs for businesses. Because of the additional liability, companies will have to have more comprehensive Directors and Officers (D&O) liability insurance that not only covers the legal expenses for a CISO to defend themselves, but also for their expenses during an investigation. Businesses who will not pay to support and protect their CISO may find themselves unable to hire for the position, while conversely, CISOs may have trouble finding supportive companies, says Josh Salmanson, senior vice president of technology solutions at Telos Corp., a cyber risk management firm. "We're going to see less people wanting to be CISOs, or people demanding much higher salaries because they think it may be a very short-term role until they 'get busted' publicly," he says. "The number of people that will have a really ideal environment with support from the company and the funding that they need will likely remain small."


Risk Management Strategies for Tech Startups

As you continue to grow, your risk management strategies will shift. One of the best things you can do as your startup gains traction is to develop a contingency plan. A contingency plan can keep things afloat if you run into an unexpected loss of customers, funding problems, or even a data disaster. Your contingency plan should include, first and foremost, strong cybersecurity practices. Cyberattacks happen even to the largest and most successful conglomerates. While you might not be able to completely stop cyber criminals from getting in, prioritizing protective measures and developing a response plan will make it easier for your business to bounce back if an attack happens. Things like using cloud-based backups, developing strong passwords and authentication practices, and educating your employees on how to keep themselves safe are all great ways to protect your business from hackers. A successful contingency plan should also cover unexpected accidents and incidents. If someone gets injured on the job or your company gets sued, a strong insurance plan needs to be in place to cover legal fees and damages.


The Architect’s Contract

The architect is a business technology strategist. They provide their clients with ways to augment business with technology strategy at both localized and universal scales. They make decisions which augment the value output of a business model (or a mission model) by describing technology solutions which can fundamentally alter the business model. Some architects specialize in one or more areas of that. But the general data indicated that even pure business architects are called on to rely on their technical skills quite often, and the most technical software architects must have numerous business skills to be successful. ... Governance is not why architects get into the job. The ones that do are generally architect managers, not competent architects themselves. All competent architects started out by making things. Proactive, innovation-based teams create new architects constantly. Moving up to too high a level of scope makes it very hard to stay a practicing architect. It takes radical dedication to learning to be a real chief architect. Scope is one of the biggest challenges of our field as it is based on the concept of scarcity. Like having city planners ‘design’ homes or skyscrapers or cathedrals.


Why DevOps is Key to Software Supply Chain Security

Organizations must also evaluate how well existing processes work to protect the business, then strategically add/subtract from there as needed. No matter what solutions are leveraged, more and different tools generate reams of more and different data. What’s important — and to whom? How do I manage the data? When can I trust it? Where do I store it? What problems does the new data help me solve? Organizations will need a way to effectively sift this information and deliver the right data to the right teams at the right time. To preserve the ability to quickly and continuously innovate, it will be important to focus on shifting security left as well as integrating automation whenever and wherever possible. As new security metadata becomes available, such as from SBOMs, new solutions for managing that metadata will be key. An open source initiative sponsored by Google, GUAC is designed to integrate software security information, including SBOMs, attestations and vulnerability data. Users can query the resulting GUAC graph to help answer key security concerns, including proactive, preventive and reactive concerns.
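As an illustration of the kind of SBOM metadata management described above (not GUAC's actual query interface), here is a minimal sketch that reads a CycloneDX-format SBOM in JSON and surfaces only the components a given team cares about; the file name and watchlist are hypothetical.

```python
import json

def components_from_sbom(path: str) -> list[dict]:
    """Load a CycloneDX-format SBOM (JSON) and return its component records."""
    with open(path) as f:
        sbom = json.load(f)
    return sbom.get("components", [])

def flag_components(components: list[dict], watchlist: set[str]) -> list[str]:
    """Return 'name@version' strings for packages on a team-specific watchlist,
    e.g. packages with advisories awaiting triage."""
    return [
        f'{c.get("name")}@{c.get("version", "?")}'
        for c in components
        if c.get("name") in watchlist
    ]

if __name__ == "__main__":
    comps = components_from_sbom("app.cdx.json")  # hypothetical SBOM file
    print(flag_components(comps, {"log4j-core", "openssl"}))
```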


The Future of Computing: Harnessing Molecules for Sustainable Data Management

Molecular computing harnesses the natural propensity of molecules to form complex, stable structures, allowing for parallel processing – an important advantage that enables computational tasks to be performed simultaneously, a feat that current supercomputers can only dream of. Enzymes like polymerases can simultaneously replicate millions of DNA strands, each acting as a separate computing pathway. This capability translates to potential parallel processing operations in the order of 10^15, dwarfing the 10^10 operations per second of the fastest supercomputers. Energy efficiency is another game-changer. The energy profile of molecular computing is notably low. DNA replication in a test tube requires minimal energy, estimated at less than a millionth of a joule per operation, compared to the approximately 10^-4 joules consumed by a typical transistor operation. This translates to a potential reduction in energy consumption by a factor of 10^5 or more, depending on the operation. To prove our point, training models like GPT-4 require tens of millions of kilowatt-hours; molecular computing could achieve similar results in a fraction of the time and with exponentially less energy.


Role of AI in Data Management Evolution – Interview with Rakesh Singh

Embracing AI-based solutions presents a challenge to organizations centered around governance and maintaining a firm grip on the overall processes. This challenge is particularly present in the financial sector, where maintaining control is not only a preference but a crucial necessity. Therefore, in tandem with the adoption of AI-driven solutions, a concerted emphasis must be placed on ensuring robust governance measures. For financial institutions, the imperative extends beyond the mere integration of AI; it encompasses a holistic commitment to upholding data security, enforcing comprehensive policies, safeguarding privacy, and adhering to stringent compliance standards. Recognizing that the implementation of AI introduces complexities and potential vulnerabilities, it becomes imperative to establish a framework that not only facilitates the effective utilization of AI but also fortifies the organization against risks. In essence, the successful adoption of AI in the financial domain necessitates a dual focus – one on leveraging the transformative potential of AI solutions and the other on erecting a resilient governance structure.


Ransomware Operation LockBit Reestablishes Dark Web Leak Site

Law enforcement agencies behind the takedown, acting under the banner of "Operation Cronos," suggested they would reveal on Friday the identity of LockBit leader LockBitSupp - but did not. "We know who he is. We know where he lives. We know how much he is worth. LockBitSupp has engaged with Law Enforcement :)," authorities instead wrote on the seized leak site. "LockBit has been seriously damaged by this takedown and his air of invincibility has been permanently pierced. Every move he has taken since the takedown is one of someone posturing, not of someone actually in control of the situation," said Allan Liska, principal intelligence analyst, Recorded Future. The re-established leak site includes victim entries apparently made just before Operation Cronos executed the takedown, including one for Fulton County, Ga. LockBit previously claimed responsibility for a January attack that disrupted the county court and tax systems. County District Attorney Fani Willis is pursuing a case against former President Donald Trump and 18 co-defendants for allegedly attempting to stop the transition of presidential power in 2020.


Toward Better Patching — A New Approach with a Dose of AI

By default, the NIST-operated National Vulnerability Database (NVD) is the source of truth for CVSS scores. But NVD gets its entries from the CVE database, and if there is no completed CVE entry, there is no NVD entry — and therefore no immediately trusted and verifiable CVSS score. Despite this, security teams use whatever CVSS score they are given as a primary factor in their vulnerability patch triage — the higher the score, the greater the perceived likelihood of exploitation and the greater the potential for harm – and it is likely to be a score applied by the vulnerability researcher. There is inevitable delay and confusion (due to ‘responsible disclosure’, possible delays in posting to the CVE database, and an element of subjectivity in the CVSS score). “The delay in CVE scoring often means that defenders face two uphill battles regarding vulnerability management. First, they need a prioritization method to determine which of the thousands of CVEs published each month they should patch,” notes Coalition. “Second, they must patch these CVEs before a threat actor leverages them to target their organization.”
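The triage problem Coalition describes can be pictured with a small sketch. This is not any vendor's scoring method, just an illustration under the assumption that each finding carries an optional NVD CVSS score, an optional researcher-supplied score, and a flag for known exploitation.

```python
from dataclasses import dataclass
from typing import Optional

@dataclass
class Finding:
    cve_id: str
    nvd_cvss: Optional[float]         # None until NVD analysis is complete
    researcher_cvss: Optional[float]  # score supplied with the disclosure
    known_exploited: bool = False     # e.g. seen in active exploitation

def effective_score(f: Finding) -> float:
    """Prefer the NVD score; fall back to the researcher's score, then 0."""
    if f.nvd_cvss is not None:
        return f.nvd_cvss
    return f.researcher_cvss or 0.0

def triage(findings: list[Finding]) -> list[Finding]:
    """Patch actively exploited CVEs first, then order by descending score."""
    return sorted(
        findings,
        key=lambda f: (f.known_exploited, effective_score(f)),
        reverse=True,
    )

queue = triage([
    Finding("CVE-2024-0001", nvd_cvss=None, researcher_cvss=9.8),
    Finding("CVE-2024-0002", nvd_cvss=7.5, researcher_cvss=None, known_exploited=True),
])
print([f.cve_id for f in queue])  # exploited CVE first, then unscored-but-critical
```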


Apple Beefs Up iMessage With Quantum-Resistant Encryption

"To our knowledge, PQ3 has the strongest security properties of any at-scale messaging protocol in the world," Apple's SEAR team explained in a blog post announcing the new protocol. The addition of PQ3 follows iMessage's October 2023 enhancement featuring Contact Key Verification, designed to detect sophisticated attacks against Apple's iMessage servers while letting users verify they are messaging specifically with their intended recipients. IMessage with PQ3 is backed by mathematical validation from a team led by professor David Basin, head of the Information Security Group at ETH Zürich and co-inventor of Tamarin, a well-regarded security protocol verification tool. Basin and his research team at ETH Zürich used Tamarin to perform a technical evaluation of PQ3, published by Apple. Also evaluating PQ3 was University of Waterloo professor Douglas Stebila, known for his research on post-quantum security for Internet protocols. According to Apple's SEAR team, both research groups undertook divergent but complementary approaches, running different mathematical models to test the security of PQ3.


Is "Secure by Design" Failing?

The threat landscape around new Common Vulnerabilities and Exposures (CVEs) is one that every organization should take seriously. With a record-breaking 28,092 new CVEs published in 2023, bad actors are simply waiting to be handed easy footholds into their target organizations, and they don't have to wait long. Research from Qualys showed that three quarters of CVEs are exploited by attackers within just 19 days of their publication. And yet, organizations are failing to equip their DevOps teams with the secure coding skills and knowledge they need to eliminate vulnerabilities in the first place. Despite 47% of organizations blaming skills shortages for their vulnerability remediation failures, only 36% have their developers learn to write secure code. ... Firstly, developers need to understand the role they play in securing overall application development. This begins with writing more secure code, but this knowledge is also essential in code reviews. As developers write faster, or even leverage generative AI and open-source code to deliver quicker applications, being able to properly review and remediate insecure code becomes crucial.



Quote for the day:

"Great achievers are driven, not so much by the pursuit of success, but by the fear of failure." -- Larry Ellison

Daily Tech Digest - February 24, 2024

Business Continuity vs Disaster Recovery: 10 Key Differences

A key part of the BCP is identifying Recovery Strategies. These strategies outline how the business will continue critical operations after an incident. These strategies might involve alternative methods or locations for conducting business. The BCP also outlines the Incident Management Plan. It sets the roles, duties, and steps for managing an incident. This includes plans to talk to stakeholders and emergency services. The Development of Recovery Plans for key business areas such as IT systems, data, and customer service is also integral. These plans provide specific instructions for returning to normal operations after the disruption. ... A disaster recovery plan is intended to reduce data loss and downtime while facilitating the quick restoration of vital business operations following an unfavorable incident. The plan comprises actions to lessen the impact of a calamity so that the company may swiftly resume mission-critical operations or carry on with business as usual. A DRP typically includes an investigation of the demands for continuity and business processes. An organization often conducts a risk analysis (RA) and business impact analysis (BIA) to set recovery targets before creating a comprehensive strategy.


Test Outlines: A Novel Approach to Software Testing

The idea of Test Outlines is a re-imagination of the traditional test case, a new approach that introduces a narrative similar to the cohesiveness and context found in test scenarios. This combination of methodologies lays the foundation for a testing approach that improves on its predecessors. The narrative structure of Test Outlines goes beyond the discrete steps of a test case, drawing those steps into a convincing storyline of a user's journey through the software. This narrative lens not only simplifies the overall testing documentation but also captures, holistically, how end-users will interact with the software in real settings. That depth allows for a much richer understanding of the testing process, moving it from a simple step checklist to a dynamic heuristic around the user experience. A narrative approach also shifts attention from isolated functionality toward the interrelationship of features. This builds up the capability to identify critical dependencies, potential integration issues, and overall system behavior across the user's interactions with the interface.


Alarm Over GenAI Risk Fuels Security Spending in Middle East & Africa

Concerns over the business impact of generative AI are certainly not limited to the Middle East and Africa. Microsoft and OpenAI warned last week that the two companies had detected nation-state attackers from China, Iran, North Korea, and Russia using the companies' GenAI services to improve attacks by automating reconnaissance, answering queries about targeted systems, and improving the messages and lures used in social engineering attacks, among other tactics. And in the workplace, three-quarters of cybersecurity and IT professionals believe that GenAI is being used by workers, with or without authorization. The obvious security risks are not dampening enthusiasm for GenAI and LLMs. Nearly a third of organizations worldwide already have a pilot program in place to explore the use of GenAI in their business, with 22% already using the tools and 17% implementing them. "With a bit of upfront technical effort, this risk can be minimized by thinking through specific use cases for enabling access to generative AI applications while looking at the risk based on where data flows," Teresa Tung, cloud-first chief technologist at Accenture, stated in a 2023 analysis of the top generative AI threats.


What’s the difference between a software engineer and software developer?

One way to think of the main difference between software engineers and developers is the scope of their work. Software engineers tend to focus more on the larger picture of a project—working more closely with the infrastructure, security, and quality. Software developers, on the other hand, are more laser-focused on a specific coding task. In other words, software developers focus on ensuring software functionality whereas engineers ensure the software aligns with customer requirements, says Rostami. “One way to think about it: If you double your software developer team, you’ll double your code. But if you double your software engineering team, you’ll double the customer impact,” she tells Fortune. But it is also important to note that because of how often each title is used interchangeably, the exact differences between a software engineer and software developer role may differ slightly from company to company. Engineers may also have a greater grasp of the broader computer system ecosystems as well as stronger soft skills. ... When it comes to total pay, engineers bring home nearly $30,000 more on average, which could, in part, be due to project completion bonuses or other circumstances.


Simplified Data Management and Analytics Strategies for AI Environments

Leveraging automation tools such as Apache Airflow or Microsoft Power Automate offers significant advantages in streamlining and optimizing the entire data management lifecycle. These tools can play a crucial role in automating not only data collection, storage, and analysis but also in orchestrating complex workflows and data pipelines, thereby reducing manual intervention and accelerating data processing. For instance, these automation tools can be harnessed to schedule and automate the extraction of data from diverse sources, such as databases, APIs, and cloud services. By automating these processes, organizations can ensure timely and efficient data collection without the need for manual intervention, reducing the risk of human errors and enhancing the overall reliability of the data. Moreover, once the data is extracted, these automation tools can seamlessly transform the data into standardized formats, ensuring consistency and compatibility across different data sources. This standardized process not only simplifies the integration of heterogeneous data but also paves the way for efficient data analysis and reporting.
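For readers unfamiliar with what such orchestration looks like, below is a minimal Apache Airflow sketch of a daily extract-and-standardize pipeline. The DAG name, schedule, and task bodies are hypothetical placeholders rather than a recommended design.

```python
from datetime import datetime
from airflow import DAG
from airflow.operators.python import PythonOperator

def extract_orders(**context):
    # Hypothetical: pull the day's records from an API or database.
    print("extracting orders for", context["ds"])

def standardize_orders(**context):
    # Hypothetical: convert extracted records to a common schema.
    print("standardizing orders for", context["ds"])

with DAG(
    dag_id="daily_orders_pipeline",
    start_date=datetime(2024, 1, 1),
    schedule_interval="@daily",   # runs without manual intervention
    catchup=False,
) as dag:
    extract = PythonOperator(task_id="extract_orders", python_callable=extract_orders)
    standardize = PythonOperator(task_id="standardize_orders", python_callable=standardize_orders)
    extract >> standardize        # standardize only after extraction succeeds
```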


Low-code doesn’t mean low quality

Granted, no-code platforms make it easy to get the stack up and running to support back-office workflows, but what about supporting those outside the workflow? Does low-code offer the functionality and flexibility to support applications that fall outside the box? The truth is that low-code programming architectures are gaining popularity precisely because of their versatility. Rather than compromising on quality programming, low-code frees developers to make applications more creative and more productive. ... Modern low-code platforms include customization, configuration, and extensibility options. Every drag-and-drop widget is pretested to deliver flawless functionality and make it easier to build applications faster. However, those widgets also have multiple options to handle business logic in different ways at various events. Low-code widgets allow developers to focus on integration and functional testing rather than component testing. ... The productivity gains low-code gives developers come primarily from the ability to reuse abstractions at the component or module level; the ability to reuse code reduces the time needed to develop customized solutions. 


ConnectWise ScreenConnect attacks deliver malware

The vulnerabilities involve authentication bypass and path traversal issues within the server software itself, not the client software that is installed on the end-user devices. Attackers have found that they can deploy malware to servers or to workstations with the client software installed. Sophos has evidence that attacks against both servers and client machines are currently underway. Patching the server will not remove any malware or webshells attackers manage to deploy prior to patching, and any compromised environments need to be investigated. Cloud-hosted implementations of ScreenConnect, including screenconnect.com and hostedrmm.com, received mitigations within hours of validation to address these vulnerabilities. Self-hosted (on-premise) instances remain at risk until they are manually upgraded, and it is our recommendation to patch to ScreenConnect version 23.9.8 immediately. ... If you are no longer under maintenance, ConnectWise is allowing you to install version 22.4 at no additional cost, which will fix CVE-2024-1709, the critical vulnerability. However, this should be treated as an interim step.


Microservices Modernization Missteps: Four Anti-Patterns of Rebuilding Apps

A common misstep when rearchitecting legacy services as microservices is to make a functional, one-to-one replica of the legacy services. You simply look at what the existing services do, and you make sure the new bundle of microservices does that. The problem here is that your business has likely evolved its operations since the legacy services were made. That means that you likely don't need all the same functionality in the legacy services. And if you do need that functionality, you might need to do it differently, which is exactly the reason you are modernizing in the first place: The legacy services are no longer helping the business function as desired. Often, organizations will consider modernizing as purely technical work and exclude business stakeholders from the process. This means developers won't have enough input from business stakeholders when picking which parts of the legacy services to replicate, which to drop, and which to improve. In this situation, developers will just replicate the legacy services. When business stakeholders and users are not involved in microservice identification, you risk misalignment on new requirements and introducing new, potential problems or rework in the future.


Entering the Age of Explainable AI

Having access to good, clean data is always a crucial first step for businesses thinking about AI transformation because it ensures the accuracy of the predictions made by AI models. If the data being fed into the models is flawed or contains errors, the output will also be unreliable and is subject to bias. Investing in a self-service data analytics platform that includes sophisticated data cleansing and prep tools, along with data governance, provides business users with the trust and confidence they need to move forward with their AI initiatives. These tools also help with accountability and -- consequently -- data quality. When a code-based model is created, it can take time to track who made changes and why, leading to problems later when someone else needs to take over the project or when there is a bug in the code. ... Equally important to the technology is ensuring that data analytics methodologies are both accessible and scalable, which can be accomplished through training. Data scientists are hard to come by and you need people who understand the business problems, whether or not they can code. No-code/low-code data analytics platforms make it possible for people with limited programming experience to build and deploy data science models. 


End-To-End Test Automation for Boosting Software Effectiveness

To check the entire application flow, QA automation engineers must implement robust automated scripts based on test cases that follow real-life user scenarios. It’s vital to make sure the scripts are maintainable and can be easily understood by every team member. It’s also important to pay special attention to tests that verify the UI to prevent flakiness, i.e., tests that sometimes fail and sometimes pass when run under the same conditions and without any code changes. This may happen because of the complicated nature of the tests or some external conditions, such as problems with the network. ... To expedite software testing activities and obtain valuable feedback faster, it's good practice to run several automated scripts at the same time on diverse equipment or environments. While doing so, companies can either use cloud infrastructure, such as virtual machines, or on-premises infrastructure, depending on the client’s technical ecosystem. In addition, in the case of the former option, QA automation engineers can ramp up cloud infrastructure to support important releases, which allows more tests to run at the same time and avoids long-term investment in local infrastructure.



Quote for the day:

"Effective Leaders know that resources are never the problem; it's always a matter of resourcefulness." -- Tony Robbins

Daily Tech Digest - February 23, 2024

When cloud AI lands you in court

In a recent legal ruling against Air Canada in a small claims court, the airline lost because its AI-powered chatbot provided incorrect information about bereavement fares. The chatbot suggested that the passenger could retroactively apply for bereavement fares, despite the airline’s bereavement fares policy contradicting this information. ... In the Air Canada case, the tribunal called it a case of “negligent misrepresentation,” meaning that the airline had failed to take reasonable care to ensure the accuracy of its chatbot. The ruling has significant implications, raising questions about company liability for the performance of AI-powered systems, which, in case you live under a rock, are coming fast and furious. Also, this incident highlights the vulnerability of AI tools to inaccuracies. This is most often caused by the ingestion of training data that has erroneous or biased information. This can lead to adverse outcomes for customers, who are pretty good at spotting these issues and letting the company know. The case highlights the need for companies to reconsider the extent of AI’s capabilities and their potential legal and financial exposure to misinformation, which will cause bad decisions and outcomes from the AI systems.


Rackspace’s MD on addressing the shortage of senior, mid-level cybersecurity talent

The Data Security Council of India (DSCI) predicts that local demand for cybersecurity professionals will reach a million positions in 2025 if the cybersecurity ecosystem continues its rapid growth. While both the government and private enterprises are taking steps to increase the number of individuals pursuing careers in cybersecurity, its impact will not be felt immediately, especially at the higher levels. As experienced professionals retire or move into more advanced roles, the industry may face a shortage of individuals with the necessary expertise and experience to fill their positions. While the increase in new graduates entering the field can fill up entry-level roles, it will take more time for them to gain the necessary experience and qualifications for senior and mid-level cybersecurity positions. Organisations will need to be innovative and creative in ensuring their cybersecurity posture in the face of a talent crunch. They will need to utilise and refine their strategies for attracting and retaining top talent, as well as upskilling existing employees, by leveraging the latest technological trends for more efficient cybersecurity practices. 


What are the main challenges CISOs are facing in the Middle East?

The skills challenge is likely going to be key as a result of the rise of disruptive technologies such as Generative AI. There will be a reshaping of the entire global workforce, and skills to adequately deal with cybersecurity issues will be in short supply. The other critical challenge that will be faced has to do with regulatory changes as nation-states seek to protect their citizens from cyberattacks. This typically adds to the overall costs of cyber compliance. Lastly, cybercrime will also rise, especially on digital platforms as people transact virtually. Cybersecurity Ventures expects damage costs from cybercrime to increase by about 15% each year over the next 3 years. ... The human resource base is very key both for cybersecurity professionals and the general employee. In cybersecurity, precedence is always given to the protection of human life before anything else. It is therefore important to ensure that people are equipped with adequate and relevant knowledge about how to identify indicators of attacks and remain alert for such attacks ... The financial services sector also relies on proprietary technology, hence any cyber-attacks on such could lead to huge losses and reputational damage. The sector also holds customer data and intellectual property, which is typically very sensitive information and held on trust.


Practical steps on carbon accounting for data centers

Measuring the carbon and material cost of our equipment is done through lifecycle assessment (LCA). This is done by disassembling products, looking at the material content, and giving each part of this an environmental weight. This is based on where and how they were sourced and what impacts these processes have. Measuring impact using the LCA method involves drawing boundaries, making assumptions, and using estimates. These estimates are shared on platforms like EcoInvent, which give specialists shortcuts on materials and good ideas on how to fill gaps. When you read reports from manufacturers, they will state where they assume the product was delivered, where it was assembled, how long it was in use, where the materials were mined, and potentially how and where it was destroyed. They need to do this because different locations will have slightly different sets of environmental risks. There are a lot of variables in play. Because of this, there is wide variance between LCAs from different manufacturers of very similar products.


Incorporating AI and automation into cyber risk management

AI-powered systems can significantly enhance organisational cyber defence capabilities through advanced threat detection, predictive analytics, and real-time monitoring. Next-generation AI-driven tools enable organisations to establish intelligent, secure, and automated systems capable of real-time threat detection, prevention, and prediction. AI models can be trained to identify anomalies in system behaviour, serving as an effective means of detecting potential cyber risks. This capability proves invaluable in recognizing potential security breaches or operational failures. Moreover, AI-powered threat intelligence contributes to identifying emerging threats, facilitating the development of proactive mitigation strategies. Ensuring compliance with IT regulations, such as the General Data Protection Regulation (GDPR) and Payment Card Industry Data Security Standard (PCI DSS), is achieved through the continuous monitoring capabilities of AI tools. These tools not only streamline compliance efforts but also enhance accuracy and efficiency. 
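As a rough illustration of the anomaly-detection idea (not a production control), the sketch below trains scikit-learn's IsolationForest on hypothetical per-session activity features and flags sessions that deviate from the baseline.

```python
import numpy as np
from sklearn.ensemble import IsolationForest

# Hypothetical features per session:
# [logins_per_hour, megabytes_uploaded, distinct_hosts_touched]
baseline = np.array([
    [3, 10, 2], [4, 12, 1], [2, 8, 2],
    [5, 15, 3], [3, 9, 1], [4, 11, 2],
])

model = IsolationForest(contamination=0.05, random_state=42).fit(baseline)

new_sessions = np.array([
    [4, 10, 2],     # resembles normal behaviour
    [40, 900, 35],  # bursty logins, large upload, many hosts
])
print(model.predict(new_sessions))  # 1 = normal, -1 = flag for review
```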


Adapting To Software Testing's Future: Success Factors

Risk-based testing is a strategic approach that prioritizes testing efforts based on the potential risk of failure and its impact on the project or business. By identifying the most critical areas of the application in terms of functionality, user impact, and likelihood of failure, teams can allocate their limited testing resources more effectively. ... Test selection techniques, such as test case prioritization and minimization, help teams focus on the tests that are most likely to detect defects. Prioritization involves ordering test cases so that those with the highest importance or likelihood of finding bugs are executed first. Minimization seeks to reduce the number of test cases to a necessary subset, eliminating redundancies without sacrificing coverage. ... By automating repetitive and time-consuming tests, teams can significantly reduce the time required for test execution. Automation is particularly effective for regression testing, where the same tests need to be run repeatedly against successive versions of the software. Automated tests can be executed faster and more frequently than manual tests, providing quicker feedback and freeing up human testers to focus on more complex and exploratory testing tasks.
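A minimal sketch of the risk-based prioritization described above might look like the following, assuming each test case carries a hypothetical failure likelihood and business-impact rating; the suite ordering simply follows likelihood multiplied by impact.

```python
from dataclasses import dataclass

@dataclass
class TestCase:
    name: str
    failure_likelihood: float  # 0.0-1.0, e.g. derived from past defect rates
    business_impact: int       # 1 (cosmetic) to 5 (revenue-critical)

def prioritize(tests: list[TestCase]) -> list[TestCase]:
    """Order tests so the highest-risk ones (likelihood x impact) run first."""
    return sorted(
        tests,
        key=lambda t: t.failure_likelihood * t.business_impact,
        reverse=True,
    )

suite = [
    TestCase("checkout_payment", 0.4, 5),
    TestCase("profile_avatar_upload", 0.6, 1),
    TestCase("login_sso", 0.2, 5),
]
print([t.name for t in prioritize(suite)])  # checkout_payment runs first
```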


5 Tips for Developer-Friendly DevSecOps

Many security tools are built for security professionals, so simply bolting them onto existing developer workflows can create friction. When looking to integrate a new tool into the SDLC, consider extracting the desired data from the security tool and natively integrating it into the developer’s workflow — or even better, look to a tool that’s already embedded within the flow. This reduces context switching, and helps developers detect and remediate vulnerabilities earlier. Additionally, leveraging AI tools within integrated development environments (IDEs) streamlines the process further, allowing developers to address security alerts without leaving their coding environment. ... A barrage of alerts, especially false positives, can erode a developer’s trust in the tool and compromise their productivity. A well-integrated security tool should have an alert system that surfaces high-priority alerts directly to developers — for example, alert settings based on custom and automated triage rules, filterable code scanning alerts and the ability to dismiss alerts contribute to a more effective alert system. This ensures developers can swiftly address urgent security concerns without being overwhelmed by unnecessary noise, and helps to ultimately clean up an organization’s security debt.


Leveraging automation for enhanced cyber security operations

A practical approach to refining automation logic involves leveraging experiences from cyber exercises, penetration tests or red teaming. Analyzing the defensive strategies of the “blue team” during various attack scenarios helps identify their response algorithms and steps. This process starts with differentiating between true and false positive alerts, identifying hacker attributes and evaluating compromised resources. Such insights enable the automation of defenses by validating logged events, ensuring a more effective and streamlined response to modern cyber threats. The first step in enhancing incident response is to automate the collection of contextual data that informs decision-making. This includes information about the particular machine or another asset involved in the security incident, user account details and intelligence on external threat elements like domain names. This foundational data is important for understanding the scope and impact of security incidents, enabling quicker and more effective responses. If an attack continues to evolve, the context gathered initially helps correlate future defensive measures with a pre-established hypothesis regarding the attack’s propagation.
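To make the context-gathering step concrete, here is a minimal sketch of alert enrichment; every lookup is a hypothetical stand-in for a real asset inventory, directory service, or threat-intelligence feed, and the field names are invented for illustration.

```python
# Stubbed lookups standing in for a CMDB, a directory service,
# and a threat-intelligence feed (all hypothetical).
def lookup_asset(host):
    return {"owner": "payments-team", "criticality": "high", "patched": False}

def lookup_user(username):
    return {"role": "service-account", "recent_locations": ["office-vpn"]}

def lookup_domain(domain):
    return {"reputation": "unknown", "first_seen": "2024-02-20"}

def enrich_alert(alert: dict) -> dict:
    """Attach machine, user, and domain context to a raw alert before triage."""
    context = {
        "asset": lookup_asset(alert["host"]),
        "user": lookup_user(alert["username"]),
        "domain_intel": lookup_domain(alert["domain"]),
    }
    return {**alert, "context": context}

print(enrich_alert({
    "host": "srv-042",
    "username": "svc_backup",
    "domain": "example-c2.net",
}))
```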


Innovation in IT: A Blueprint for Digital Evolution

Success requires a methodical approach. Digital Business Methodology (DBM) provides insight into the "What" that shapes your approach, with the "How" contingent on tools, ecosystem, leadership support, and team skill set. DBM is a comprehensive strategy that empowers companies to embrace and implement digital business practices. It provides a well-defined path orchestrating data, technology, and personnel alignment. This approach yields results across the enterprise, emphasizing speed, consistency, and scalability through an outcome-driven, incremental process. This methodology's core is a business-led, agile digital culture focused on achieving bite-sized outcomes essential for accelerating business growth. Under the DBM umbrella, businesses lead in collaboration with key stakeholders throughout the entire process, from ideation to deployment. The primary focus lies in simplifying end-to-end workflows and establishing a single source of truth (SSOT). This guided and adaptable ideation-to-deployment ecosystem facilitates seamless collaboration among business owners, engineers, analysts, scientists, and operational teams, driving innovative solutions and achieving desired outcomes.


The Psychology of Cybersecurity Burnout

The cybersecurity landscape is incredibly complex, and the cybersecurity procedures implemented by a given organization are likely to vary significantly. However, a number of factors have emerged as being likely contributors to this mental health phenomenon. ... Anticipating developing threats is a further problem. Staff simply don’t have time to stay on top of the news and devise procedures that can deal with novel ransomware attacks or whatever else may be brewing in the attack space. “If I don’t get on top of this, it’s gonna be a problem for me and my team,” Gartland says. “So, we’re just trying to figure out: How do I learn something on the weekend or late at night?” Cybersecurity professionals must be highly attentive to their work and conspicuous failures can often be traced to a single error, increasing the burden of responsibility on even low-level employees. The vigilance required of the job is equivalent to that required of air traffic controllers and medical professionals. People who strongly identify with those responsibilities are more likely to suffer burnout due to intense internal motivation to fulfill them even when it is not realistic.



Quote for the day:

"Go as far as you can see; when you get there, you'll be able to see farther." -- J. P. Morgan

Daily Tech Digest - February 22, 2024

New Wave of 'Anatsa' Banking Trojans Targets Android Users in Europe

"Initially the [cleaner] app appeared harmless, with no malicious code and its AccessibilityService not engaging in any harmful activities," ThreatFabric said. "However, a week after its release, an update introduced malicious code. This update altered the AccessibilityService functionality, enabling it to execute malicious actions such as automatically clicking buttons once it received a configuration from the C2 server," the vendor noted. The files that the dropper dynamically retrieved from the C2 server included configuration info for a malicious DEX file for distributing Android application code; a DEX file itself with malicious code for payload installation, configuration with a payload URL, and finally code for downloading and installing Anatsa on the device. The multi-stage, dynamically loaded approach used by the threat actors allowed each of the droppers that they used in the latest campaign to circumvent the tougher AccessibilityService restrictions Google implemented in Android 13, Threat Fabric said. For the latest campaign, the operator of Anatsa chose to use a total of five droppers disguised as free device-cleaner apps, PDF viewers, and PDF reader apps on Google Play.


CIO Gray Nester on fostering a culture of success

It’s easy to be courageous when you’ve already achieved more than you ever thought you would. I don’t have to be afraid to fail because I’m successful in the things that matter — my family. That’s where my love comes from. As a leader, courage and always doing what’s right equate to being honest but also being kind. There’s a difference between being honest and being truthful. As I have the opportunity to coach people, I have to deliver hard messages, and those are honest messages. I can be truthful with you and never address the opportunity to improve. So, I think courage is the willingness to say things that may not be popular but that help you achieve the goals and objectives you’re capable of achieving. We all show up here every day for something bigger than ourselves. If you believe in assuming positive intent and believe that people show up every day to be successful, then if you can give them the tough message, you have to believe they’re going to take that and do something with it because feedback is a gift. That doesn’t mean that everybody will be successful in that, but it’s our responsibility as leaders to go out and do that. That may mean saying, ‘Hey, Business, you’ve got a really bad idea, and this isn’t going to work, and let me tell you why.’ 


Navigating the Data Revolution: Exploring the Booming Trends in Data Science and Machine Learning

A significant trend in data science and machine learning revolves around incorporating artificial intelligence (AI) to drive automation. Industries across the spectrum are harnessing the potential of machine learning algorithms to streamline everyday tasks, fine-tune processes, and boost efficiency. Whether in manufacturing, healthcare, finance, or logistics, the wave of AI-powered automation is fundamentally transforming the operational landscape of businesses. ... Natural Language Processing (NLP) has taken center stage in the expansive realm of machine learning. Thanks to strides in deep learning models such as GPT-3, machines are rapidly evolving, displaying a remarkable proficiency in deciphering and generating language that mimics human expression. This transformative trend is reshaping how we engage with technology, from the intuitive responses of chatbots and virtual assistants to the seamless intricacies of language translation and content creation. ... The widespread adoption of Internet of Things (IoT) devices has triggered a notable upswing in data generation right at the edge of networks. A trend gaining significant traction is the fusion of edge computing with decentralized machine learning geared towards processing data near its source.


The Impact of Technical Ignorance

As most non-technical folks appear unable or unwilling to accept that software is hard, our responsibility – for better or worse – is to show and explain. Unique situations require adjusting the story told, but it is necessary – and never-ending – to have any chance to get the organization to understand: explaining how software is developed and deployed, demonstrating how a data-driven organization requires quality data to make correct decisions, explaining the advantages and disadvantages of leveraging open source solutions; showing examples of how open source licenses impact your organization’s intellectual property. Look for opportunities to inject background and substance when appropriate, as education is open-ended and never-ending. ... Aside from those employed in purely research and development roles, engineering/technology for engineering/technology's sake is not feasible, as technology concerns must be balanced with business concerns: product and its competitors, sales pipeline, customer support and feature requests, security, privacy, compliance, etc. 


Kubernetes Predictions Were Wrong

The view that Kubernetes would settle into quiet utility and effectively disappear while also running all our workloads failed to materialize. Nobody managed to create a single opinionated path for Kubernetes that would take care of all these choices. The simple reason for this is that the mythical one true way wouldn’t work for most applications and services. It’s impossible to create a simple, simple path without acknowledging the context of the application and organization. This is why platform engineering has gained traction. While there’s little chance of creating an industrywide path of simplified choices, creating one within an organization is perfectly feasible. A minimal viable platform could be a wiki page listing pre-baked decisions and providing a standard example for each configuration file. This might evolve into a facade that allows developers to specify what they need along a simple dimension, such as “size,” with the platform taking care of the details behind the flag. Platforms should provide simplified ways to do the right thing while letting expert developers peel back the layers when the standard approach isn’t suitable.


How DSPM Fits into Your Cloud Security Stack

DSPM solutions provide unique security capabilities and are specifically tailored to addressing sensitive data in the cloud, but also to supporting a holistic cloud security stack. As the variety and sophistication of attacks increase over time, new challenges arise that the existing security stack can hardly keep up with. A new, more aligned, and holistic inventory of security tools should be considered, consisting of identity threat protection, data-related risk reduction, privacy management, and a host of other imperative elements while ensuring continuous monitoring of any cloud asset, including CSPs, SaaS apps, File Shares, and DBaaS. However, building the most appropriate cloud security stack to do so may prove challenging in light of the numerous different – but similar-sounding – security domains in the market. DSPM tools protect data wherever it resides (IaaS, PaaS, SaaS, DBaaS, and File Shares), combined with advanced identity-centric data threat protection. They empower security teams to reduce data risk and achieve unparalleled visibility into data location, misconfiguration, comprehensive and tailored classification, access permissions, usage patterns, and potential threats, ensuring continuous data security and governance. 


Face off: Attackers are stealing biometrics to access victims’ bank accounts

Cybersecurity company Group-IB has discovered the first banking trojan that steals people’s faces. Unsuspecting users are tricked into giving up personal IDs and phone numbers and are prompted to perform face scans. These images are then swapped out with AI-generated deepfakes that can easily bypass security checkpoints. The method — developed by a China-based hacking family — is believed to have been used in Vietnam earlier this month, when attackers lured a victim into a malicious app, tricked them into face scanning, then withdrew the equivalent of $40,000 from their bank account. ... “These tools are relatively low cost, easily accessed and can be used to create highly convincing synthesized media such as face swaps or other forms of deepfakes that can easily fool the human eye as well as less advanced biometric solutions,” he said. ... “Organizations may begin to question the reliability of identity verification and authentication solutions, as they will not be able to tell whether the face of the person being verified is a live person or a deepfake,” writes Gartner VP analyst Akif Khan.


Critical infrastructure attacks aren’t all the same: Why it matters to CISOs

Effectively restraining foreign adversaries would require limiting connectivity to critical infrastructure, which is only incrementally possible (via air-gapping, etc.). Better awareness of malign intentions, however, should dampen the sophistication of intrusion activity, and institutionalization of critical infrastructure preparedness and mitigation fundamentals should mitigate threat severity. From this perspective, Wray’s push to spread awareness of the PRC threat is wise, as is Canada’s attempt to pass stricter regulation of critical infrastructure operators’ security practices. One limits the discretionary conditions the Chinese need to build this capability; the other builds toward an inter-institutional apparatus that is more inherently adaptive, which should reduce the value of the capability. Stakeholders in the United States and elsewhere should double down on efforts that conform to these parameters. From more consistent declassification of details of critical infrastructure attacks to public disclosure of critical infrastructure operators’ security performance outcomes, public sector stakeholders can limit the conditions under which foreign activity can find strategic value.


Report: Manufacturing bears the brunt of industrial ransomware

One of the main reasons the manufacturing sector is so heavily targeted is that it adopted digitization at a much quicker pace than, for example, the water and wastewater or transportation sectors. But Lee was quick to point out that other industrial sectors are catching up to the broad digital footprint – and potential access points – of the manufacturing sector. “The manufacturing industry really went through that quote unquote, digital transformation and connectivity very quickly. As a result of not investing in IoT security when they did that, we’re seeing a lot of ransomware cases, a lot of activists, criminals, etc., disrupting manufacturing,” Lee said. “Far more than gets reported publicly.” The manufacturing sector, Lee said, still struggles with segmenting networks like those that deal with human resources from the operational technology networks that control operations, a gap that can allow a hacker broad access to the organization. However, that trend is spreading to other sectors, such as water and wastewater, Lee warned. He expects an increase in ransomware attacks on water and other utilities as digitization becomes more common.


4 Steps to Achieving Operational Flow and Improving Quality in Tech Teams

Removing dependencies is often a lot of work. Dependencies tend to be the result of specialist knowledge that resides in another part of the organisation, or of past architectural choices, so they can feel inevitable and inescapable. There’s a lot of truth to the idea that removing dependencies will be painful and time-consuming, but they only have to be removed once, at which point the team never has to deal with that dependency again. It’s an investment today in order to get better results tomorrow. ... Rather than arranging teams in functional silos, arrange them so they can deliver value independently. This arrangement allows more work to move through the system simultaneously, because different streams of work no longer create delays for other teams. Each of the above contributes to improving flow. But what about improving quality? The interesting thing is that each of the steps above improves quality, too. By doing fewer things at once, the reduced cognitive load will make it easier for the team to produce higher-quality work, while reduced context switching makes it less likely they’ll miss something important.



Quote for the day:

"To do great things is difficult; but to command great things is more difficult." -- Friedrich Nietzsche

Daily Tech Digest - February 21, 2024

The Top 5 Kubernetes Security Mistakes You’re Probably Making

Kubernetes configurations are primarily defined using YAML files, a human-readable data serialization format. However, the simplicity of YAML is deceptive, as small errors can lead to significant security vulnerabilities. One common mistake is improper indentation or formatting, which can cause the configuration to be applied incorrectly or not at all (illustrated in the sketch below). ... The ransomware attack on the Toronto Public Library revealed the critical importance of network microsegmentation in Kubernetes environments. By limiting network access to only the necessary resources, microsegmentation is pivotal in preventing the spread of attacks and safeguarding sensitive data. ... eBPF is the basis for creating a universal “security blanket” across Kubernetes clusters, and is applicable on premises, in the public cloud and at the edge. Its integration at the kernel level allows for immediate detection of monitoring gaps and seamless application of security measures to new and changing clusters. eBPF can automatically apply predefined security policies and monitoring protocols to any new cluster within the environment.
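As a toy illustration of the indentation point (not taken from the article), the following snippet parses two nearly identical manifests with PyYAML and shows how a two-space slip moves a securityContext block from the container to the pod level; the manifest contents and names are assumptions made for the demo.

```python
# How a small YAML indentation slip changes where a setting lands.
# Requires PyYAML (pip install pyyaml).
import yaml

intended = """
spec:
  containers:
    - name: app
      image: registry.example/app:1.0
      securityContext:
        runAsNonRoot: true
"""

# Two spaces less: securityContext is now parsed as a pod-level field,
# no longer attached to the container it was meant to harden.
mis_indented = """
spec:
  containers:
    - name: app
      image: registry.example/app:1.0
  securityContext:
    runAsNonRoot: true
"""

good = yaml.safe_load(intended)["spec"]["containers"][0].get("securityContext")
bad = yaml.safe_load(mis_indented)["spec"]["containers"][0].get("securityContext")
print("intended container securityContext:    ", good)  # {'runAsNonRoot': True}
print("mis-indented container securityContext:", bad)   # None: the setting moved elsewhere
```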


Error-correction breakthroughs bring quantum computing a step closer

The best strategy, says Sam Lucero, chief quantum analyst at Omdia, would be to combine multiple approaches to get the error rates down even further. ... The bigger question is which type of qubit is going to become the standard – if any. “Different types of qubits might be better for different types of computations,” he says. This is where early testing can come in. High-performance computing centers can already buy quantum computers, and anyone with a cloud account can access one online. Using quantum computers via a cloud connection is much cheaper and quicker. Plus, it gives enterprises more flexibility, says Lucero. “You can sign on and say, ‘I want to use IonQ’s trapped ions. And, for my next project, I want to use Rigetti, and for this other project, I want to use another computer.’” But stand-alone quantum computers aren’t necessarily the best path forward for the long term, he adds. “If you’ve got a high-performance computing capability, it will have GPUs for one type of computing, quantum processing units for another type of computing, CPUs for another type of computing – and it’s going to be transparent to the end user,” he says. “The system will automatically parcel it out to the appropriate type of processor.”


Is hybrid encryption the answer to post-quantum security?

One of the biggest debates is how much security hybridization offers. Much depends on the details, and algorithm designers can take any number of approaches with different benefits. There are several models for hybridization, and not all the details have been finalized. Encrypting the data first with one algorithm and then with a second combines the strength of both, essentially putting a digital safe inside a digital safe. Any attacker would need to break both algorithms. However, the combinations don’t always deliver in the same way. For example, hash functions are designed to make it hard to identify collisions, that is, two different inputs that produce the same output: (x_1 and x_2, such that h(x_1)=h(x_2)). If the output of the first hash function is fed into a second, different hash function (say g(h(x))), it may not get any harder to find a collision, at least if the weakness lies in the first function. If two inputs to the first hash function produce the same output, then that same output will be fed into the second hash function to generate a collision for the hybrid system: (g(h(x_1))= g(h(x_2)) if h(x_1)=h(x_2)), as the toy example below shows. Digital signatures are also combined differently than encryption. One of the simplest approaches is to just calculate multiple signatures independently of each other.
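A toy demonstration of that collision argument: the inner hash h is a deliberately weak stand-in so a collision is easy to exhibit, and g is SHA-256; the construction is illustrative only, not a real hybrid scheme.

```python
# Composing a strong hash on top of a weak one does not repair collisions:
# once h(x1) == h(x2), the composition g(h(x)) collides as well.
import hashlib

def h(data: bytes) -> bytes:
    """Deliberately weak inner 'hash': sum of bytes mod 256 (one byte of output)."""
    return bytes([sum(data) % 256])

def g(data: bytes) -> str:
    """Strong outer hash (SHA-256)."""
    return hashlib.sha256(data).hexdigest()

x1, x2 = b"ab", b"ba"        # different inputs...
assert x1 != x2
assert h(x1) == h(x2)        # ...that collide under the weak inner hash

# The collision propagates through the composition: g(h(x)) is only as
# collision-resistant as h itself.
print(g(h(x1)) == g(h(x2)))  # True
```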


By elevating partners’ service capabilities, we ensure they offer a comprehensive cybersecurity solution to enterprises in today’s dynamic threat landscape

The MSSPs have a significant opportunity for growth, with an increasing number of partners showing interest in this domain. What’s notable is that our focus isn’t solely on partners delivering network security solutions but also extends to other offerings. For instance, our SIEM solutions now feature a consumption-based model, attracting more partners to explore the realm of MSSP partnerships. This trend has already gained momentum over the past year, indicating a promising trajectory for the future. As the market continues to expand, catering to a diverse range of customers across various sizes and sectors, the demand for managed security services will only intensify. Here, our integrator partners play a crucial role, positioned to capitalise on the growing requirements of clients. Moreover, selected MSSP partners have the opportunity to develop specialised services around Fortinet solutions, leveraging programs like FortiDirect, FortiEDR, FortiWeb, and FortiMail. Our offerings, such as the MSSP Monitor program and Flex VM program, provide flexible consumption models tailored to the evolving needs of MSP partners. 


Early adopters fast-tracking gen AI into production, according to new report

One in four organizations say gen AI is critically important to gaining increased productivity and efficiency. Thirty percent say improving customer experience and personalization is their highest priority, and 26% say it’s the technology’s potential to improve decision-making that matters most. ... “The generative AI phenomenon has captured the attention of the market—and the world—with both positive and negative connotations,” said Howard Dresner, founder and chief research officer at Dresner Advisory. “While generative AI adoption remains nascent in the near term, a strong majority of respondents indicate intentions to adopt it early or in the future.” ... Nearly half of organizations consider data privacy to be a critical concern in their decision to adopt gen AI. Legal and regulatory compliance, the potential for unintended consequences, and ethics and bias concerns are also significant. Fewer than half of respondents—46% and 43%, respectively—consider costs and organizational policy important to generative AI adoption. Weaponized LLMs and attacks on chatbots fuel fears over data privacy. More organizations are fighting back and using gen AI to protect against chatbot leaks.


AI and data centers - Why AI is so resource hungry

Is it the data set, i.e. the volume of data? The number of parameters used? The transformer model? The encoding, decoding, and fine-tuning? The processing time? The answer is, of course, a combination of all of the above. It is often said that GenAI large language models (LLMs) and natural language processing (NLP) require large amounts of training data. However, measured in terms of traditional data storage, this is not actually the case. ... It is thought that GPT-3 was trained on 45 terabytes of Common Crawl plaintext, filtered down to 570GB of text data. The corpus is hosted on AWS for free as Common Crawl’s contribution to open source AI data. But storage volumes, the billions of web pages or data tokens that are scraped from the web, Wikipedia, and elsewhere and then encoded, decoded, and fine-tuned to train GPT-3 and other models, should have no major impact on a data center. Similarly, the terabytes or petabytes of data needed to train a text-to-speech, text-to-image or text-to-video model should put no extraordinary strain on the power and cooling systems of a data center built for hosting IT equipment storing and processing hundreds or thousands of petabytes of data.
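A quick back-of-the-envelope check of that claim, using the figures quoted above; the 100 PB data-center footprint is an assumed round number for scale, not a figure from the article.

```python
# Rough scale comparison: the quoted training corpus versus a large storage footprint.
raw_corpus_tb = 45      # quoted raw Common Crawl plaintext, in terabytes
filtered_gb = 570       # quoted filtered training text, in gigabytes
data_center_pb = 100    # assumed storage footprint of a large facility, in petabytes

filtered_tb = filtered_gb / 1000
print(f"filtered vs. raw crawl:     {filtered_tb / raw_corpus_tb:.1%}")          # ~1.3%
print(f"filtered vs. data center:   {filtered_gb / (data_center_pb * 1e6):.6%}") # ~0.0006%
```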


Making cloud infrastructure programmable for developers

Just as service-oriented architecture (SOA) evolved application architecture from monolithic applications into microservices patterns, IaC has been the slow-burn movement that is challenging what the base building blocks should be for how we think of cloud infrastructure. IaC really got on the map in the 2010s, when Puppet, Chef, and Ansible introduced IaC methods for the configuration of virtual machines. Chef was well-loved for allowing developers to use programming languages like Ruby and for the reuse and sharing that came with being able to use the conventions of a familiar language. During the next decade, the IaC movement entered a new era as the public cloud provider platforms matured, and Kubernetes became the de facto cloud operating model. HashiCorp’s Terraform became the IaC poster child, introducing new abstractions for the configuration of cloud resources and bringing a domain-specific language (DSL) called HashiCorp Configuration Language (HCL) designed to spare developers from lower-level cloud infrastructure plumbing.


A cloud-ready infra: Fundamental shift in how new-age businesses deliver value to customers

Cloud computing has emerged as a robust and secure platform for data storage, offering unparalleled protection against extreme conditions and disasters. Today’s cloud-based providers offer robust security and disaster recovery capabilities, ensuring the safety and integrity of critical data assets. ... This includes empowering doctors and nurses to access patient records securely on their own devices and facilitating remote consultations through virtual desktop infrastructure (VDI). This instant access has transformed the way healthcare professionals interact with patient data, allowing doctors to review charts on tablets during rounds and nurses to retrieve medication histories from any workstation. By storing data on secure servers rather than end-client devices, cloud-based solutions guarantee the protection of critical medical records in the event of theft or compromise of an end device. ... This approach not only ensures data security but also meets the stringent requirements of healthcare institutions while allowing for scalable systems connected to the hospital’s network.


The Paradox of Productivity: How AI and Analytics are Shaping the Future of Work-Life Balance

One of the key challenges we face is managing time effectively in an environment where the line between ‘on’ and ‘off’ hours is increasingly fuzzy. AI-powered tools and analytics can generate insights and tasks round-the-clock, leading to an ‘always-on’ work culture. This can encroach upon personal time, making it challenging to disconnect and potentially causing stress and burnout. Maintaining mental health in this context is paramount. It is incumbent upon companies to ensure that the implementation of AI and analytics tools does not exacerbate workplace stress. Instead, these tools should be leveraged to promote a healthier work-life balance by automating routine tasks, predicting workload peaks, and enabling flexible working arrangements. Achieving personal fulfillment in the age of AI also means embracing lifelong learning. As the nature of work evolves, so too must our skillsets. Upskilling and reskilling become not just a means to professional advancement but also an opportunity for personal growth and satisfaction. Analytics can play a role here in identifying skill gaps and learning opportunities that align with individual career paths and interests.


The importance of a good API security strategy

Hackers love exploiting APIs for many reasons, but mostly because APIs let them bypass security controls and easily access sensitive company and customer data, as well as certain functionality. A recent incident involving a publicly exposed API of social media platform Spoutible could have ended in attackers stealing users’ 2FA secrets, encrypted password reset tokens, and more. This type of incident can result in a loss of customer and business partners’ trust, consequently leading to financial loss and a drop in brand value. Poor API security practices can also have regulatory and legal consequences, cause disruption to company operations and even result in intellectual property theft. ... A good API security strategy is essential for every organization that wants to keep its digital assets safe and protect sensitive customer data. OWASP constantly updates its list of the top 10 API security threats. While security practitioners mustn’t rely solely on this data, the list is still an essential tool when planning a security strategy that will hold up. Adhering to the NIST Cybersecurity Framework is also an essential step in planning a good API security strategy.



Quote for the day:

"The signs of outstanding leadership are found among the followers." -- Max DePree