Daily Tech Digest - May 07, 2024

How generative AI is redefining data analytics 

When applied to analytics, generative AI:

- Streamlines the foundational data stages of ELT: Predictive algorithms are applied to optimize data extraction, intelligently organize data during loading, and transform data with automated schema recognition and normalization techniques.
- Accelerates data preparation through enrichment and data quality: AI algorithms predict and fill in missing values and identify and integrate external data sources to enrich the data set, while advanced pattern recognition and anomaly detection ensure data accuracy and consistency.
- Enhances analysis of data, such as geospatial analysis and autoML: Mapping and spatial analysis through AI-generated models enable accurate interpretation of geographical data, while automated selection, tuning, and validation of machine learning models increase the efficiency and accuracy of predictive analytics.
- Elevates the final stage of analytics, reporting: Custom, generative AI-powered applications provide interactive data visualizations and analytics tailored to specific business needs.
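The enrichment and data quality steps described above can be sketched in miniature. A hypothetical Python baseline follows; real AI-driven tooling uses learned models, so median imputation and z-score flagging here are only minimal stand-ins:

```python
# Minimal stand-ins for two of the preparation steps described above:
# filling missing values and flagging anomalies. Real AI-driven tooling
# uses learned models; median imputation and z-scores are placeholders.
from statistics import mean, median, stdev

def impute_missing(values):
    """Fill None entries with the median of the observed values."""
    observed = [v for v in values if v is not None]
    fill = median(observed)
    return [fill if v is None else v for v in values]

def flag_anomalies(values, threshold=2.0):
    """Return indices of values whose z-score exceeds the threshold."""
    mu, sigma = mean(values), stdev(values)
    return [i for i, v in enumerate(values)
            if sigma and abs(v - mu) / sigma > threshold]

daily_orders = [102, 98, None, 105, 101, 99, 9750, 103]
cleaned = impute_missing(daily_orders)
print(cleaned)                  # None replaced by the median, 102
print(flag_anomalies(cleaned))  # the 9750 spike at index 6
```

In practice the baselines would be replaced by model-driven imputation and learned anomaly detectors, but the pipeline shape stays the same.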


Open-source or closed-source AI? Data quality and adaptability matter more

Licensing and usage terms of services matter in that they dictate how you use a particular model — and even what you use it for. Even so, getting caught up in the closed vs. open zealotry is shortsighted at a time when 70% of CEOs surveyed expect gen AI to significantly alter the way their companies create, deliver and capture value over the next three years, according to PwC. Rather, you should focus on the quality of your data. After all, data will be your competitive differentiator — not the model. ... Experimenting with different model types and sizes to suit your use cases is a critical part of the trial-and-error process. Right-sizing, or deploying the most appropriate model sizes for your business, is even more crucial. Do you require a broad, boil-the-ocean approach that spans as much data as possible to build a digital assistant with encyclopedic knowledge? A large LLM trained on hundreds of billions of data points may work well. ... Of course, the gen AI model landscape is ever evolving. Future models will look and function differently than those of today. Regardless of your choices, with the right partner you can turn your data ocean into a wellspring of insights.


Tips for Building a Platform Engineering Discipline That Lasts

A great platform engineer is defined both by their ability to create infrastructure and by their ability to advocate for and guide others (which is where communication skills come in) — especially in the platforms that are maturing today. As far as hard skills go, the platform engineer should have experience in cloud platforms, CI/CD, IaC, security, and automation. Other roles you’ll need include a product owner to manage platform stakeholders and track KPIs. Our 2024 State of DevOps report found that 70% of respondents said a product manager was important to the platform team – 52% of whom called the role “critical”. To avoid complexity and scaling issues, you’ll also need architects with the vision and skills to help the platform engineering team design and build the platform. Infrastructure as code (IaC) is version control for your infrastructure. It makes infrastructure human-readable, auditable, repeatable, scalable, and securable. IaC also lets disparate teams — developers, operations, and QA — review, collaborate, iterate, and maintain infrastructure code simultaneously.
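The "human-readable, auditable, repeatable" point is easiest to see when infrastructure is expressed as data. A toy sketch (no real IaC tool's API; the resource names are invented) of how a declarative desired state can be diffed against current state, the way a plan/apply workflow does:

```python
# Toy sketch of the IaC idea: infrastructure declared as data, so it can
# be versioned, diffed, and reviewed like any other code. Resource names
# and attributes are invented for illustration.
desired = {
    "web-server": {"type": "vm", "size": "small", "count": 3},
    "db": {"type": "managed-postgres", "size": "medium"},
}
current = {
    "web-server": {"type": "vm", "size": "small", "count": 2},
}

def plan(current, desired):
    """Compute the changes needed to move current state to desired state."""
    changes = []
    for name, spec in desired.items():
        if name not in current:
            changes.append(("create", name))
        elif current[name] != spec:
            changes.append(("update", name))
    for name in current:
        if name not in desired:
            changes.append(("delete", name))
    return changes

print(plan(current, desired))   # [('update', 'web-server'), ('create', 'db')]
```

Because the "plan" is itself reviewable output, disparate teams can inspect proposed infrastructure changes in a pull request before anything is applied.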


What Is the American Privacy Rights Act, and Who Supports It?

The APRA ostensibly is about data, but AI is also covered a bit. Companies must evaluate their “covered algorithms” before deploying them and provide that evaluation to the FTC and the public. Companies must also adhere to people’s requests to opt out of the use of any algorithm related to housing, employment, education, health care, insurance, credit, or access to places of public accommodation. The APRA would be enforced by a new bureau operating under the Federal Trade Commission (FTC). State attorneys general would also be able to enforce the new law. It would also allow individuals to file private lawsuits against companies that violate the law. There are several important exceptions in the APRA. For instance, small businesses, defined as having less than $40 million in annual revenue or collecting data on 200,000 or fewer individuals (as long as they’re not in the data-selling business themselves), are exempt from the APRA’s requirements. Governmental agencies and organizations working for them are also exempt, as are non-profit organizations whose main purpose is fighting fraud.


Empowering Users: Embracing Product-Centric Strategies in SaaS

A non-negotiable requirement for a SaaS product to succeed with a product-centric strategy is for it to be intuitively designed with minimal friction and a focus on delivering value as quickly as possible. This is not a set-and-forget task: it demands a profound understanding of the critical user journey and ruthless prioritization of friction and pain point elimination, rather than just plastering feature promotions through in- and out-of-product interventions. However, this cannot be done if teams don’t use data analytics or prioritize the voice of the customer through feedback loops to further product development and work towards building a loved and delightful product. A great example of a PLG pioneer is Figma. ... On the other hand, adopting a product-led growth approach requires fundamental organizational shifts. The success of PLG requires a combined, multidisciplinary team dedicated to continuous improvement and adaptation of the product to support both new customer acquisition as well as retention and growth.


6 tips to implement security gamification effectively

Gamification leverages elements of traditional gaming, online and offline, to boost engagement and investment in the learning process. Points, badges, and leaderboards reward successful actions, fostering a sense of achievement and friendly competition. Engaging scenarios and challenges simulate real-world threats, allowing trainees to apply knowledge practically. Difficulty levels keep learners engaged, while immediate feedback on decisions solidifies learning and highlights areas for improvement. Effective implementation hinges on transparency, simplicity, and a level playing field. A central dashboard that displays the same security data for everyone keeps things simple, fostering a shared understanding of progress. ... Personalized challenges help ensure engagement. New security teams might focus on mastering foundational tasks like vulnerability scans, while seasoned teams tackle advanced challenges like reducing response time for critical security events. This keeps everyone motivated and learning, while offering continuous improvement for the entire team.
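As a toy illustration of the points-badges-leaderboard mechanics described above (the action names, point values, and badge thresholds are invented):

```python
# Minimal sketch of gamified security training: points per action,
# badges at thresholds, and one shared leaderboard for transparency.
# All names and numbers here are invented for illustration.
from collections import defaultdict

POINTS = {"phishing_reported": 10, "vuln_scan_completed": 25}
BADGE_THRESHOLDS = [(100, "Defender"), (50, "Analyst")]  # high to low

scores = defaultdict(int)

def record_action(team, action):
    scores[team] += POINTS[action]

def badge(team):
    """Highest badge the team has earned, or None."""
    for threshold, name in BADGE_THRESHOLDS:
        if scores[team] >= threshold:
            return name
    return None

def leaderboard():
    # The same data for everyone, sorted high to low: a level playing field.
    return sorted(scores.items(), key=lambda kv: kv[1], reverse=True)

record_action("blue-team", "vuln_scan_completed")
record_action("blue-team", "vuln_scan_completed")
record_action("soc", "phishing_reported")
print(leaderboard())       # blue-team leads with 50 points
print(badge("blue-team"))  # Analyst
```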


Rethinking ‘Big Data’ — and the rift between business and data ops

Just as data scientists need to think more like businesspeople, so too must businesspeople think more like data scientists. This goes to the issue of occupational identity. Executives need to expand their professional identities to include data. Data professionals need to recognize that DI (changes in information) do not necessarily equate to DB (changes in behavior). Going forward, data professionals are not just in the information/insight delivery business, they are in the “create insight that drives value creating behavior” business. The portfolio of tools now available has democratized the practice of data science. One no longer needs to be a math genius or coding phenom to extract value from data — see Becoming a Data Head: How to Think, Speak, and Understand Data Science, Statistics, and Machine Learning by Alex J. Gutman, Jordan Goldmeier. ... Executives need ready access to data professionals to guide their use of data power tools. Data professionals need to be embedded in the business rather than quarantined in specialized data gulags.


The Technical Product Owner

There is a risk that the technical Product Owner or product manager might no longer focus on the “why” but start interfering with the “how,” which is the Developers’ domain. Conversely, a technical Product Owner might help the Developers understand the long-term business implications of technical decisions made today. ... A technical Product Owner would be highly beneficial when the product involves complex technical requirements or relies heavily on specific technologies. For example, in projects involving intricate software architecture or specialized domain knowledge, a technical Product Owner can provide valuable guidance, facilitate more informed decision-making, and effectively communicate with the Developers. This deep technical understanding can lead to better solutions, improved product quality, and increased customer satisfaction, especially in industries where technical expertise is critical, such as software development or engineering.


The digital transformation divide in Europe’s banking industry

Europe’s digital divide is a product of typical characteristics: internet connectivity, digital literacy, the availability of smartphones and digital devices. Disparities in broadband access between urban and rural communities remain stubbornly persistent. According to Eurostat, around 21% of rural households in the European Union do not have access to broadband internet, compared to only 2% of urban households. In Romania, which ranked lowest on the EU’s Digital Economy and Society Index in 2022, the market is dominated by incumbent banks. Only 69.1% of adults hold a bank account, pointing to low levels of financial literacy and inclusion – underpinned by a preference for a cash economy. In contrast, the UK has a fintech adoption rate of over 60%, according to data from Tipalti, and Lithuania has established itself as an impressive fintech ecosystem backed by the nation’s central bank. However, it is too simplistic to reduce the digital divide to regional disparities, as the starker differences lie between countries themselves.


Why AMTD Is the Key to Stopping Zero-Day Attacks

AMTD technology uses polymorphism to create a randomized, dynamic runtime memory environment. Deployable on endpoints and servers, this polymorphism ability creates a prevention-focused solution that constantly moves system resources while leaving decoy traps in their place. Threats then see these decoy resources where real ones should be and end up trapped. For users, it’s business as usual: they don’t notice any difference, system performance is unaffected, and security teams gain a new layer of preventative telemetry. Today, more and more companies are turning to AMTD technologies to defeat zero days. In fact, industry analysts like Gartner suggest that AMTD technology is paving the way for a new era of cyber defense possibilities. That’s because instead of trying to detect zero-day compromise, these technologies prevent exploits from deploying in the first place. Against zero-day attacks, this is the only defensive approach organizations can rely on.



Quote for the day:

"Always remember, your focus determines your reality." -- George Lucas

Daily Tech Digest - May 06, 2024

No. 1 Reason The ‘Shadow IT’ Trend Is Benefiting Careers But Hampering Companies

Kirimli agrees that while on the surface this trend might seem beneficial, there’s a downside. “While ‘Shadow IT’ might seem like a quick solution for faster project delivery, its drawbacks are too significant to ignore,” he declares. “Safety standards are compromised, recovering accounts becomes a nightmare and company resources are inefficiently utilized.” ... “Convenience is a major reason that shadow IT occurs,” according to Vineet Jain, the co-founder and CEO of Egnyte, a data security startup in Silicon Valley. “Employees sometimes use unauthorized apps or services that make their lives easier—regardless of whether their employer has approved their use of those apps or services. For example, in the early 2000s, cloud technology was becoming more prominent as a way to let workers access the same files and services from any device or share heavy files that couldn't be attached to emails. Before cloud technology went mainstream, employees were using their own software of choice to access the cloud, bypassing IT in doing so. The cloud eventually got so mainstream that it became a corporate initiative.”


AI governance and cybersecurity certifications: Are they worth it?

When the certifications are combined with a strong history of putting that knowledge into action, you get a strong competitive advantage. Once formal standards are released, Bartoletti expects to see a lot more certifications coming out, covering specific topics like how to comply with the EU AI Act, or with NIST, or with other rules and regulations. “I think there will also be a lot of attention on specific sectors, like governing AI in healthcare, or in financial services.” Certifications like the AIGP are particularly valuable for consultants, agrees Steve Ross, director of cybersecurity for the Americas at S-RM Intelligence and Risk Consulting. “Our clients are feeling the uncertainty,” Ross tells CSO. “They would like to increase the use of AI but nobody knows how to use it safely, securely, ethically — and are looking for someone they can trust to do that.” As a result, clients will be looking for the certifications over the next two or three years. “I don’t have one of these certifications, but am thinking of pursuing them,” Ross adds, such as the AIGP certification, which he finds interesting.


Tech’s new arms race: The billion-dollar battle to build AI

Nevertheless, smaller models may be right for certain applications, especially those where complete knowledge across multiple data domains is not needed. For example, an SLM can be fine-tuned on company-specific data and jargon to provide accurate and personalized responses to customer queries. Or, one could be trained using data for a specific industry or market segment or used to generate comprehensive and tailored research reports and answers to queries. As Rowan Curran, a senior AI analyst at Forrester Research, said recently about the different language model options, “You don’t need a sportscar all the time. Sometimes you need a minivan or a pickup truck. It is not going to be one broad class of models that everyone is using for all use cases.” ... Just as rising costs have historically restricted the number of companies capable of building high-end semiconductors, similar economic pressures now shape the landscape of large language model development. These escalating costs threaten to limit AI innovation to a few dominant players, potentially stifling broader creative solutions and reducing diversity in the field.


Strategies for preventing AI misuse in cybersecurity

No technology, including AI, is inherently good or bad. It’s all about how we use them. And yes, while AI is very powerful in helping us speed up everyday tasks, the bad guys can use it to do the same. We will see phishing emails that are more convincing and more dangerous than ever before thanks to AI’s ability to mimic humans. If you combine that with multi-modal AI models that can create deepfake audio and video, it’s not impossible that we’ll need two-step verification for every virtual interaction with another person. It’s not about where the AI technology is today, it’s about how sophisticated it gets in a few years if we remain on this same trajectory. Fighting these sophisticated threats requires equally advanced AI-driven behavioral analytics to spot anomalies in communication and AI-augmented digital content verification tools to spot deepfakes. Threat intelligence platforms that utilize AI to sift through and analyze vast amounts of data to predict and neutralize threats before they strike are another robust defense. However, tools are limited in their usefulness. I believe we will see the rise of in-person and face-to-face interactions for highly sensitive workflows and data.


9 in 10 Attacks Start With Phishing: Can AI Save The Day? Expert Analysis

Josh Amishav, founder and CEO at Breachsense, told Techopedia that LLMs, like those used in generative AI for anti-spam, offer significant advantages over traditional methods by being able to understand and generate human-like text. This helps them better identify subtle nuances and variations in phishing tactics. “Unlike rule-based systems, LLMs can generalize from past data to recognize and react to previously unseen types of spam and phishing attacks. This adaptability allows them to continuously improve their detection accuracy via continuous training to update their models. “Generative AI will certainly enhance anti-spam and phishing protection by leveraging predictive capabilities and natural language understanding to better identify and block sophisticated threats,” Amishav said. “Having said that, there are a number of techniques threat actors could use to evade detection. “LLMs are only as good as the models they’re trained on. If attackers can poison the training model or develop strategies that bypass the model’s ability to properly classify the attack then the effectiveness of these systems could be compromised.”


Unlocking Business Potential: Leveraging Data Protection for Growth and Privacy

The modern competitive business landscape demands that businesses differentiate themselves from others in every aspect of customer satisfaction. And data protection plays a crucial role in navigating the wonderful journey of lasting customer relationships. Your customers actively seek service providers prioritizing their privacy, security, and consent as a part of their business strategy. This means if a business needs to thrive in a highly competitive landscape, it must make data protection a core element of its value proposition. Whether it’s about investing in tools and technologies or implementing stringent privacy policies, everything contributes to enhancing customer trust and brand reputation in the long run. Remember, when customers perceive a business as a trusted guardian of their sensitive information, they’re more likely to choose its services and products over those of competitors. When we talk about data protection, it’s not just about a legal obligation; it’s a strategic imperative for enterprises aiming to unlock diverse growth opportunities. Hence, prioritizing data protection not only lays a strong foundation of trust but also enables customers to stay loyal to a brand.


5 tips for building highly scalable cloud-native apps

To build systems that are highly available and horizontally scalable, you need an architecture that is built using scalable and composable building blocks. Concretely, the work done by a scalable system should grow linearly with the increase in system size. The original Kafka architecture does not fulfill this criterion because many aspects of load increase non-linearly with the system size. For instance, as the cluster size increases, the number of connections increases quadratically, since all clients typically need to talk to all the brokers. Similarly, the replication overhead also increases quadratically, since each broker would typically have followers on all other brokers. The end result is that adding brokers causes a disproportionate increase in overhead relative to the additional compute/storage capacity that they bring. A second challenge is ensuring isolation between tenants. In particular, a misbehaving tenant can negatively impact the performance and availability of every other tenant in the cluster. Even with effective limits and throttling, there will likely always be some load patterns that are problematic.
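The non-linear growth described can be checked with back-of-the-envelope arithmetic. A small sketch, assuming every client connects to every broker and every broker follows every other broker:

```python
# Back-of-the-envelope model of the costs described above: all-to-all
# client connections grow with clients * brokers, and all-to-all
# replication links grow quadratically in the broker count.
def client_connections(clients, brokers):
    return clients * brokers

def replication_links(brokers):
    # each broker follows every other broker: n * (n - 1) directed links
    return brokers * (brokers - 1)

for n in (3, 6, 12):
    print(n, client_connections(1000, n), replication_links(n))
```

Doubling brokers from 6 to 12 doubles the compute/storage capacity but grows replication links from 30 to 132, which is exactly the disproportionate overhead the excerpt describes.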


Implementing digital evaluation systems: Best practices for educators

The implementation of digital evaluation systems is an iterative process that necessitates continuous evaluation and improvement. Boards/Universities should frequently examine the platform’s performance in meeting predefined goals and address any difficulties or challenges that occur. Soliciting comments from evaluators and administrators can provide useful information for enhancing system usability, functionality, and user experience. The Boards/Universities can maintain the long-term sustainability and relevance of the digital evaluation system by constantly analysing and developing it to improve result processing. Digital evaluation goes beyond grading; it is a transformative tool for educational excellence and legacy building. It enables Boards/Universities to carefully review student answer booklets, ensuring accuracy and fairness in evaluation. Streamlining administrative processes speeds up the publication of results while maintaining security and efficiency. This is more than just promoting student success; it is about continuous improvement, data security, and personalised evaluation experiences.


MongoDB and Google Cloud Collaborate to Optimise Gemini Code Assist for Developers Building Applications on MongoDB

Developers today want to integrate generative AI-powered coding assistants to automate parts of their day-to-day workflows so they can significantly increase their productivity and focus on difficult problems. Gemini Code Assist is trained on publicly available datasets, has full codebase awareness, and integrates with popular applications, source-code repositories, and code editors. MongoDB and Google Cloud have partnered closely to help improve Gemini Code Assist responses, bringing a greater depth of intelligence for developers on MongoDB. Through this integration, developers can get enhanced suggestions for MongoDB to help them more quickly build and modernize their applications, and easily access highly curated content and code from MongoDB documentation, detailed use cases, and common tasks with best practices that developers encounter when working with data on MongoDB. Consequently, Gemini Code Assist will help developers more quickly write high-quality code when building data aggregations, performing database operations, and accelerating migration of applications to MongoDB for modernization.
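As an example of the data aggregations mentioned, here is a sketch of the kind of MongoDB aggregation pipeline a coding assistant might help generate. The collection and field names are invented; the $match, $group, $sort, and $limit stages are standard MongoDB operators:

```python
# Hypothetical pipeline: total shipped-order value per customer, top five.
# Field and collection names are invented for illustration.
pipeline = [
    {"$match": {"status": "shipped"}},
    {"$group": {"_id": "$customer_id", "total": {"$sum": "$amount"}}},
    {"$sort": {"total": -1}},
    {"$limit": 5},
]

# With PyMongo, this would be executed against a collection as:
#   results = db.orders.aggregate(pipeline)
print(len(pipeline))  # four stages
```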


Eight Overlooked Emerging Tech Risks and How to Mitigate Them

When emerging technology doesn’t align with a company’s strategic goals, it can lead to poor ROI and low stakeholder buy-in. For example, a manufacturing company that installs IoT sensors to monitor equipment health without integrating them into existing maintenance strategies risks unplanned downtime, increased costs and reduced productivity. To avoid these risks, ensure that emerging technology aligns with the company’s strategic vision. This requires choosing technology that supports business objectives and planning for change management, workforce training and adequate budget. ... When the board of directors and executive leadership lack an understanding of emerging technologies, it can impact decision-making. ... Treating emerging technology as a tool for incremental improvement rather than business transformation can limit success. A traditional retailer embracing e-commerce, for example, needs to go beyond launching an online store by incorporating data analytics and AI for personalized marketing to realize full benefits. These changes improve the customer experience, streamline operations and open new growth opportunities. 



Quote for the day:

“The greatest discovery of all time is that a person can change their future by merely changing their attitude.” -- Oprah Winfrey

Daily Tech Digest - May 05, 2024

Building Resilient and Secure FinTech Infrastructure: Strategies for DDoS Mitigation and Cybersecurity

Companies may have to pay anything from $1 million to over $5 million for every hour of downtime, not to mention any further fines, fees, or penalties under the law. That is in addition to higher DDoS security investments made after the fact and higher cyber insurance premiums. As a result, DDoS victims may end up paying a ransom. However, this isn’t a fix. It does not ensure that a DDoS assault won’t occur again. FinTech businesses need to be proactive if they want total DDoS resilience. Regardless of the security services they use, organizations are extremely susceptible to distributed denial-of-service (DDoS) assaults. The only way to withstand such attacks is to implement non-disruptive DDoS testing and obtain uninterrupted and comprehensive insight into the DDoS security posture. Continuous DDoS testing on live settings is necessary for FinTech firms and their DDoS protection vendors to identify vulnerabilities, prioritize remediation, and ensure that the solutions are applied appropriately. Staying ahead of the threat curve means taking a preventive rather than a reactive approach to safeguarding online services against DDoS attacks.


Data Governance Act: Understanding the Cross-Sectoral Instrument

The Data Governance Act sets out guidelines for the utilization within the EU of data held by public sector entities. This data is secured and protected for reasons such as commercial confidentiality, third-party intellectual property rights and the protection of personal data. While the Act does not explicitly detail the circumstances under which it is applied to foreign organizations, there are several provisions which imply its extraterritorial implications. Any non-EU entity which provides services within the European Union and qualifies as a data altruism company or intermediary must appoint a legal representative in every member state it is operating in. ... The Act complements the Open Data Directive by addressing the re-use of secured data which falls outside the scope of the latter. It establishes several safeguards for the re-utilization of such data held by public sector bodies, including governmental and other public sector entities. The Act does not obligate the public sector bodies to permit the reuse of data but sets conditions for any such authorization.


Six Data Quality Dimensions to Get Your Data AI-Ready

Compliance: The degree to which data is in accordance with laws, regulations, or standards. How is the use of your data now changing? Do you need to uphold your data to higher standards and different requirements in these new use cases? Consider also: A true disruptor like GenAI may result in the need for new policy, and therefore, new regulation. Can you anticipate future regulation around AI usage given your industry and data? ... Accessibility: The ease with which data can be consulted or retrieved. Scalability and repeatability require consistent accessibility. Is your data reliably accessible by the right people and technologies? How accessible must it be? If your data was temporarily inaccessible, how damaging would that be? What is the acceptable threshold that would ensure your project will succeed? Access Security: The degree to which access to datasets is restricted. Consider the privileges and permissions to your data and the implications. Are you building an AI tool in-house or are you using a service? Which of your company’s data are you willing to provide third party access to? Ensure that you are not sharing data that you cannot or should not share. 


Want to drive more secure GenAI? Try automating your red teaming

When red-teaming GenAI, manual probing is a time-intensive but necessary part of identifying potential security blind spots. However, automation can help scale your GenAI red teaming efforts by automating routine tasks and identifying potentially risky areas that require more attention. At Microsoft, we released the Python Risk Identification Tool for generative AI (PyRIT)—an open-access framework designed to help security researchers and ML engineers assess the robustness of their LLM endpoints against different harm categories such as fabrication/ungrounded content like hallucinations, misuse issues like machine bias, and prohibited content such as harassment. PyRIT is battle-tested by the Microsoft AI Red Team. It started off as a set of one-off scripts as we began red teaming GenAI systems in 2022, and we’ve continued to evolve the library ever since. Today, PyRIT acts as an efficiency gain for the Microsoft AI Red Team—shining a light on risk hot spots so that security professionals can then explore them. This allows the security professional to retain control of the AI red team strategy and execution. 


A Novel AI Approach to Enhance Language Models: Multi-Token Prediction

The researchers behind this study propose a new technique called multi-token prediction. Instead of predicting one token (word) at a time, this method trains the model to predict multiple future tokens simultaneously. Imagine it like this: While learning a language, instead of guessing one word at a time, you’re challenged to predict entire phrases or even sentences. Sounds intriguing, right? So, how does this multi-token prediction work? The researchers designed a model architecture with a shared trunk that produces a latent representation of the input context. This shared trunk is then connected to multiple independent output heads, each responsible for predicting one of the future tokens. For example, if the model is set to predict four future tokens, it will have four output heads working in parallel. During training, the model is fed a text corpus, and at each position, it is tasked with predicting the next n tokens simultaneously. This approach encourages the model to learn longer-term patterns and dependencies in the data, potentially leading to better performance, especially for tasks that require understanding the broader context.
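The shared-trunk-plus-heads design can be sketched with plain matrix shapes. A toy sketch, assuming NumPy is available; the single linear "trunk" below stands in for the real transformer, and all sizes are invented:

```python
# Toy sketch of the architecture described: a shared trunk produces one
# latent vector for the context, and n independent output heads each map
# that latent to logits for one of the n future tokens. Shapes only;
# real models use transformers, not this linear stand-in.
import numpy as np

rng = np.random.default_rng(0)
vocab, d_model, n_future = 50, 16, 4

trunk = rng.normal(size=(d_model, d_model))            # stand-in for the shared trunk
heads = [rng.normal(size=(d_model, vocab)) for _ in range(n_future)]

def predict_next_n(context_embedding):
    latent = np.tanh(context_embedding @ trunk)        # shared representation
    return [latent @ head for head in heads]           # one logit vector per head

logits = predict_next_n(rng.normal(size=(d_model,)))
print(len(logits), logits[0].shape)                    # 4 heads, each over the vocab
```

During training, each head's loss is computed against a different future token at the same position, which is what pushes the shared trunk toward longer-range representations.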


Uncomplicating the complex: How Spanner simplifies microservices-based architectures

Sharding is a powerful tool for database scalability. When implemented correctly, it can enable applications to handle a much larger volume of read and write transactions. However, sharding is not without its challenges; it brings its own set of complexities that need careful navigation. ... Over time, database complexity can grow along with increased traffic, adding further toil to operations. For large systems, a combination of sharding along with attached scale-out read replicas might be required to help ensure cost-effective scalability and performance. This combined dual-strategy approach, while effective in handling increasing traffic, significantly ramps up the complexity of the system's architecture. This captures the need to add scalability and availability to a transactional relational database powering a service. ... We want to emphasize that we’re not arguing that Spanner is only a good fit for microservices. All the things that make Spanner a great fit for microservices also make it great for monolithic applications.
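For concreteness, the core of sharding is a deterministic routing function from key to shard; much of the complexity the excerpt alludes to arrives later, when the shard count changes and data must be rebalanced. A minimal sketch with invented key names:

```python
# Minimal sketch of hash-based sharding: a deterministic function routes
# each key to one of N shards. Changing N (resharding) is where much of
# the operational complexity comes from, since existing keys must move.
import hashlib

N_SHARDS = 4

def shard_for(key: str) -> int:
    digest = hashlib.md5(key.encode()).hexdigest()
    return int(digest, 16) % N_SHARDS

customers = ["user-17", "user-42", "user-99"]
placement = {k: shard_for(k) for k in customers}
print(placement)  # the same key always routes to the same shard
```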


VPNs aren't invincible—5 things a VPN can't protect you from

A VPN can deter a hacker from trying to intercept your internet traffic, but it cannot prevent you from landing on a scam website yourself or sharing your personal details with someone on the web. Also, thanks to AI-powered tools, attackers can craft increasingly convincing messages at high speed. This means phishing attacks will keep happening in the future. The good news is that, despite how well-made the messages are, you can always spot a scam. As a rule of thumb, if something is too good to be true, it generally is—so, beware of grand promises. ... While a VPN keeps you more anonymous online, preventing some forms of tracking, it only works at a network level. Tracking cookies, though, are stored directly on your web browser. Hence, VPNs aren't much of a help against such trackers. To mitigate the risks, I recommend clearing the internet cookies on your devices on a regular basis. ... As we have seen, VPNs are not a magic wand that'll magic away cyber threats and danger. Nonetheless, this software still protects you from a great many risks and strongly enhances your digital posture—so, all in all, VPNs are still vital pieces of security equipment.


Top Digital Transformation Themes and EA Strategies

Enterprise Architects (EAs) face significant challenges in engaging business stakeholders. One of the main difficulties is shifting the perception of the EA platform from a tool imposed by the EA team to a collaborative instrument that benefits the entire business. EA teams also have a tendency to focus inwardly, leaning on technical architectural terminology and objectives that other business units may not understand or resonate with. ... Architects often struggle to communicate the ROI of Enterprise Architecture or link architectural initiatives and value directly to the business's overarching goals and metrics. A common hurdle is the traditional IT-centric approach of EA, which may not align with the dynamic needs of the business. ... The segregation of data into siloes hampers transparency and makes it difficult to achieve a comprehensive overview of the organization’s data landscape. So when architects try to bring all this information together, the complexity often results in bottlenecks as they struggle to manage and govern IT effectively. This severely limits the potential for insights and efficiency across the organization. 


How To Build a Scalable Platform Architecture for Real-Time Data

Many platforms enable autoscaling, like adjusting the number of running instances based on CPU usage, but the level of automation varies. Some platforms offer this feature inherently, while others require manual configuration, like setting the maximum number of parallel tasks or workers for each job. During deployment, the control plane provides a default setting based on anticipated demand but continues to closely monitor metrics. It then scales up the number of workers, tasks or instances, allocating additional resources to the topic as required. ... Enterprises prioritize high availability, disaster recovery and resilience to maintain ongoing operations during outages. Most data-streaming platforms already have robust guardrails and deployment strategies built in, primarily by extending their cluster across multiple partitions, data centers and cloud-agnostic availability zones. However, it involves trade-offs like increased latency, potential data duplication and higher costs. Here are some recommendations when planning for high availability, disaster recovery and resilience.
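The scale-up decision the control plane makes can be sketched as a proportional rule. The target utilization, worker cap, and choice of CPU as the metric are illustrative assumptions; the formula mirrors the one documented for Kubernetes' Horizontal Pod Autoscaler:

```python
import math

def desired_workers(current: int, cpu_utilization: float,
                    target: float = 0.6, max_workers: int = 32) -> int:
    """Proportional scaling: desired = ceil(current * observed / target),
    clamped between 1 and a configured maximum."""
    if cpu_utilization <= 0:
        return current  # no signal; hold steady
    desired = math.ceil(current * cpu_utilization / target)
    return max(1, min(desired, max_workers))

print(desired_workers(4, 0.9))  # load above target -> scale up
print(desired_workers(4, 0.3))  # load below target -> scale down
```

The same shape applies whether the unit being scaled is a worker, a task, or an instance; what varies between platforms, as the excerpt notes, is whether this loop runs automatically or must be configured by hand.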


How will AI help democratise intelligence in algorithmic trading? Here are 5 ways

AI has played a significant role in making algorithmic trading more accessible to the masses. The emergence of openly available AI services like ChatGPT and open-source AI models like Llama 3 has made AI technology available to anyone with an internet connection for roughly $20 a month. This has given traders access to algorithms, and to the creation of algorithms, that were previously available only to large institutions. Additionally, many platforms have made it possible for traders to automate their trades without requiring coding knowledge, thanks to the contribution of AI in algo trading. ... Algorithms powered by market research and AI can significantly reduce trading errors caused by emotions and impulsive decisions. Traditional trading methods rely heavily on expertise, intuition, and precision, whereas AI-powered algorithms reduce reliance on these factors and enhance the accuracy, efficiency, and overall performance of trades. These algorithms can be customised to fit various market situations, whether it's a stable or volatile market.



Quote for the day:

"Courage doesn't mean you don't get afraid. Courage means you don't let fear stop you." -- Bethany Hamilton

Daily Tech Digest - May 04, 2024

We Need an Updated Strategy to Secure Identities

Identity needs to be foremost in any security strategy since we’ve ample evidence it remains a frequent target. Most breaches today originate with identity through human error, social engineering or phishing. Solutions providers like Cisco are offering cybersecurity tools that bring together the worlds of identity, networking and security to detect and prevent these identity threats. Solutions like Cisco Duo, Cisco Identity Intelligence, and Cisco Secure Access can minimize exposure. Cisco Duo protects access to applications and data with strong multi-factor authentication, while Cisco Secure Access emphasizes secure remote connectivity to prevent unsanctioned users from gaining access. Additionally, Cisco Identity Intelligence uses AI to analyze user behavior and identity data to proactively clean up vulnerable identities and to detect identity-based security threats. Most organizations use a variety of solutions collected over the years that now reside in the cloud, on premises or in hybrid environments. That’s why a platform approach is so important. It also needs to be easy to deploy and easy for end users to manage.


What is cybersecurity mesh architecture (CSMA)?

Cybersecurity mesh architecture (CSMA) is a set of organizing principles used to create an effective security framework. Using a CSMA approach means designing a security architecture that is composable and scalable with easily extensible interfaces, a common data schema and well-defined interfaces and APIs for interoperability. ... A CSMA proactively blocks attacks through a variety of controls and system design principles. Leveraging advanced machine learning for anomaly detection and employing Secure Access Service Edge (SASE) for dynamic, secure cloud access, CSMA ensures robust encryption standards for data at rest and in transit. Network segmentation and micro-segmentation, paired with continuous authentication and strict authorization, can restrict lateral movement. These components, alongside continuous compliance monitoring and risk management tools, orchestrate a multi-layered defense strategy that preempts cyber threats by dynamically adapting to the evolving security landscape and ensuring continuous protection against potential vulnerabilities and unauthorized access attempts.


Managing Digital Debt: Artificial Intelligence And Human Sustainability

Digital debt represents the time and energy spent managing digital tasks, impeding core job responsibilities. At the same time, while their employees are trying to manage digital communication and creative thinking, organizations are constantly chasing after cutting-edge software solutions to stay ahead in the competitive market. In that race, they are piling up their tech balance sheets with the lesser-known but omnipresent “technical debt.” ... Leaders face the daunting task of balancing short-term gains with long-term sustainability, promoting accountability and continuous improvement within their teams. Increasing digital debt hampers organizational agility, raises maintenance costs, heightens the risk of failures and diminishes employee morale, highlighting the imperative for effective leadership in managing debt accumulation. Rather than chasing the newest trends and platforms, leaders should focus on their employees and the ease of doing business not only for the customers but also employees. 


Enhancing Developer Experience for Creating AI Applications

Kuzniak mentioned that enhancing the developer experience is as crucial as improving user experience. Their goal is to eliminate any obstacles in the implementation process, ensuring a seamless and efficient development flow. They envisioned the ideal developer experience, focusing on simplicity and effectiveness: For the AI implementation, we’ve established key principles: Simplicity: enable implementation with just one line of code. Immediate Accessibility: allow real-time access to prompts without the need for deployment. Security and Quality: integrate security and quality management by design. Cost Efficiency: design cost management and thresholds into the system by default. Kuzniak mentioned that their organizational structures are evolving in the face of a shifting technology landscape. The traditional cross-functional teams comprising product managers, designers, and developers, while still relevant, may not always be the optimal setup for AI projects, as he explained: We should consider alternative organizational models.


Code faster with generative AI, but beware the risks when you do

"Our experience is that [GenAI-powered] software coding tools aren't as security-aware and [attuned with] security coding practices," he said. For instance, developers who work for organizations in a regulated or data-sensitive environment may have to adhere to additional security practices and controls as part of their software delivery processes. Using a coding assistant can double productivity, but developers need to ask if they can adequately test the code and fulfill the quality requirements along the pipeline, he noted. It's a double-edged sword: Organizations must look at how GenAI can augment their coding practices so the products they develop are more secure, and -- at the same time -- how the AI brings added security risks with new attack vectors and vulnerabilities. Because it delivers significant scale, GenAI amplifies everything an organization does, including the associated risks, Shaw noted. A lot more code can be generated with it, which also means the number of potential risks increases exponentially.


It's the End of the Entrepreneurial Era As We Know It

Today, being an entrepreneur seems to be as easy as twiddling your thumbs and clicking (or swiping) on a few buttons on an app on a smartphone. Hard work? Unlikely! Just click the right settings or prompts and 'Voila!' let the machine do the hard work! Humans were born with the anatomy and physique to be hunters, gatherers, lumberjacks, climbers, and runners. We were blessed to be physically active and agile. Unfortunately, the human race has just been through an entire century of changing those mannerisms into becoming desk-bound, delivery-service complacent hermits. ... Is a person truly an entrepreneur, when all they did was click a button and the rest of it was automated? If they built the hardware, software, and automation themselves? Then in my eyes, it's clearly entrepreneurial. But if another created the machine and they used it, are they really an entrepreneur? Having produced and directed many TV shows exploring and exposing advanced tech and innovation positively, I am clearly bullish on our technologically supercharged future. 


'Architecture by conference' is a really bad idea

The role of a generative AI architect should go beyond merely applying existing technologies; it should involve pioneering new methodologies and pushing the boundaries of what’s possible. As leaders, we must foster a culture that not only encourages innovation but actively rewards it. Are we questioning established norms and continuously seeking opportunities to improve and innovate? Are we blindly following other people’s approaches to completely different business problems? It’s time to stop imitating architectural processes from hyperscaler conferences or reusing frameworks, spreadsheets, and slides developed for another project by whatever consulting firm. You need to get smart quickly and stop copying off other people’s papers. The journey toward exceptional generative AI architecture for use in or out of the cloud is challenging yet crucial. It requires a break from tradition, a commitment to deep customization, and a resolve to innovate. I wish I could tell you this is easy, but we’re about to embark on building core IT systems that will define the business’s value.


The slow burn of data egress fees

Despite the financial and technical barriers, some companies are undertaking “cloud repatriation,” where workloads are moved off the cloud and back on-premises. A UK-based study conducted by Citrix found that, of the 350 companies surveyed, a quarter have moved half or more of their cloud-based workloads back to their own infrastructure, or are considering doing so. Among the list of motivations, Citrix noted unexpected costs (33 percent of respondents), performance issues, security concerns, compatibility issues, and service downtime. Twenty-two percent listed financial concerns as the main motivation for repatriation. Fifty percent of respondents identified data transfer fees as a significant contributing factor to unexpected cloud costs. Omdia’s Hahn has seen more repatriation of late. “In the last year or two years, companies have been a lot more selective,” he says. “There used to be a trend of cloud first, cloud all in, but now it seems more like companies think, ‘okay, we’ve got these workloads, some of them make sense to go to the cloud, some make sense to go on-premise.’”


Coaching your IT team for change: 9 tips

“You need a clearly articulated ‘why,’” says Sharon Mandell, CIO of Juniper Networks. “Then you need to communicate, communicate, communicate. And if you think you’ve communicated enough, communicate some more.” People will still push back, she says. There will always be people who say, “We’ve always done it this way and it works just fine.” ... “Changes are most accepted if they are tied to mission and purpose,” says Jennifer Dulski, CEO and founder of software company Rising Team. “Every company has a vision, a mission, a set of values,” she says. If you tie the change to that, it won’t feel arbitrary or unnecessary. “Start by grounding it in your mission,” she says. “And be clear about how the benefits are tied to the mission.” Paulo Gardini Miguel, director of technology at The CTO Club, agrees. “Begin by painting the big picture,” he says. “Explain the rationale behind the change and demonstrate how it aligns with the organization’s goals. Highlight the benefits of the change for the team, the company, and the customers.” Whenever possible, leaders go deeper than the company’s stated mission, Dulski says.


Clean Data, Trusted Model: Ensure Good Data Hygiene for Your LLMs

“Garbage in, garbage out” has never rung truer than with LLMs. Just because you have vast troves of data to train a model doesn’t mean you should do so. Whatever data you use should have a reasonable and defined purpose. The fact is, some data is just too risky to input into a model. Some can carry significant risks, such as privacy violations or biases. It is crucial to establish a robust data sanitization process to filter out such problematic data points and ensure the integrity and fairness of the model’s predictions. In this era of data-driven decision-making, the quality and suitability of the inputs are just as vital as the sophistication of the models themselves. One method rising in popularity is adversarial testing on models. Just as selecting clean and purposeful data is vital for model training, assessing the model’s performance and robustness is equally crucial in the development and deployment stages. These evaluations help detect potential biases, vulnerabilities or unintended consequences that may arise from the model’s predictions. There’s already a growing market of startups specializing in providing services for precisely this purpose. 
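A sanitization pass of the kind described can be sketched in a few lines. This is a minimal illustration of filtering risky records before training, assuming documents arrive as plain strings; the two regex patterns are illustrative assumptions and nowhere near a complete PII detector:

```python
import re

# Hypothetical markers of risky data; real pipelines would use far
# richer detectors (named-entity models, dictionaries, policy rules).
EMAIL = re.compile(r"[\w.+-]+@[\w-]+\.[\w.]+")
SSN = re.compile(r"\b\d{3}-\d{2}-\d{4}\b")

def sanitize(records: list) -> list:
    """Drop training records that contain obvious PII markers."""
    return [r for r in records
            if not (EMAIL.search(r) or SSN.search(r))]

docs = [
    "Contact me at jane@example.com for details.",
    "The quarterly report shows steady growth.",
    "SSN on file: 123-45-6789",
]
print(sanitize(docs))  # only the report sentence survives
```

The point of the excerpt holds regardless of the detector used: the filter runs before the data ever reaches the model, so problematic points never influence its weights.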



Quote for the day:

"Leadership occurs any time you attempt to influence the thinking, development, or beliefs of somebody else." -- Dr. Ken Blanchard

Daily Tech Digest - May 03, 2024

Multi-cloud, multi-headaches?

Achieving true cloud portability requires adhering to open standards, avoiding proprietary services, and abstracting away cloud-specific dependencies. This often involves refactoring or re-architecting existing applications, which can be a daunting task for banks with extensive legacy systems and monolithic architectures. Failure to properly plan for portability can lead to vendor lock-in, negating one of the key benefits of a multi-cloud strategy. Another challenge lies in managing and optimising costs across multiple cloud providers. Each provider has its own pricing models, billing mechanisms, and cost optimisation strategies, making it difficult to maintain a consistent and efficient cost management approach. Banks must invest in specialised tools and expertise to monitor and optimise cloud spending across their multi-cloud environment, ensuring they are not overpaying for resources or services. As I mentioned last week, this is a burgeoning area called FinOps. Security and compliance are also critical concerns in a multi-cloud world. Banks must ensure that their data and applications are protected according to industry standards and regulations, regardless of the cloud provider they are using. 


How to Streamline Your M&A Process with Active Directory Migration

In an enterprise setting, AD is a core component of managing user authentication and authorization—a dual function critical to maintaining secure and systematic access to company resources. Take, for example, a platform like SharePoint, which is deeply integrated with AD. SharePoint uses AD to authenticate user credentials and authorize different access levels to documents and collaborative spaces. The strategic realignment of AD during a merger ensures that SharePoint and similar applications continue to function seamlessly, following the established user permissions and security measures while maintaining the system’s integrity. AD consolidation is a strategic move toward streamlining company operations during a merger or acquisition. When you merge different AD environments, you reduce the complexity and the need for multiple teams to manage each system. It's like having a single remote control linked to all your devices instead of a different one for each. With this approach, there's less chance for disruptions and data loss, and companies can minimize downtime, preserve data integrity, and maintain compliance with regulatory requirements.



Dealing With Chaos: A Guide for Leaders Feeling Overwhelmed at Work

Being able to handle these scenarios requires the ability to take this blurry problem and turn it into one that can be solved. This can be rewarding, but often also involves very long-running, complex work that requires wrangling multiple stakeholders with differing motivations, long-running negotiations, and, somehow, getting a tangible deliverable out at the end — which can be months or even years out. As a result, leaders also lack the positive experience and dopamine hits that come with getting things done. Many leaders are in the middle of not just hierarchies, but also communication and context levels, translating higher-level messages from senior leaders for lower-level team members who require more clarity and want to understand what those messages mean for them. It’s taxing to constantly be in the middle, translating between higher and lower levels of the organization, making your boss care about issues they’re not super interested in, and helping teammates understand decisions from higher-up leaders. The tech industry, while fast-paced and innovative, often pressures leaders to adopt the mantras of, “Move fast, break stuff, do more with less and better to ask for forgiveness than for permission.” 


Now Is the Time for Executives to Deploy Ethical Rules Around AI

The evolution of technology is a runaway train. In many respects, the field of AI is in its adolescence, but in the past year alone advances in AI and other nascent technologies have continued at a breakneck pace. Globally, the public and private sector and academia are engaging in ongoing debates over the promise, peril, and appropriate uses of AI. As a result, we can expect 2024 to be a year of enhanced government regulation of the technology. I believe it will ultimately fall to governments to set standards and laws around what those parameters will be. But while governments will undoubtedly play a critical regulatory role, the speed of AI adoption will require that company executives create ethical guidelines of their own around AI. ... Making sure all AI tools are being formed using unbiased and balanced data will be of particular importance. Consider facial recognition technology powered by generative AI. If an AI dataset is trained to favor a certain type of ethnicity, unfair biases and outcomes will almost certainly be baked into the tool. Working to create AI tools that avoid these types of dangerous and inequitable outcomes will be critical to the technology’s continued development.


Superconducting "Islands" Could Lead to Magnetic Memory

The superconductor in question is yttrium barium copper oxide (YBCO), which acts as a superconductor below -181 °C—far from room temperature but positively balmy by the standards of most superconductors. When YBCO is chilled below that critical temperature, any magnetic field lines that pass through its interior get expelled to its exterior. However, researchers can still cast magnetic field lines through the YBCO’s interior by blasting it with a new external magnetic field, then quickly turning off that magnetic field. Researchers have found that if they layer a non-superconducting ferromagnetic material atop the YBCO, they can imprint those temporary magnetic patterns through the superconductor and “write” them into the ferromagnet. Then, the ferromagnet will hold this pattern even after the external magnetic field is switched off. This is an ability of great interest for spintronics, the discipline already investigating how to use these kinds of magnetic structures as computer memory instead of the CMOS transistors that underpin most of today’s electronics.


How to Strike a Balance Between Technology and Human Connection in the Workplace

Personal talk frequently goes by the wayside among remote teams. Rather than chalking this up to inevitability, create space at the beginning of Teams or Zoom meetings to discuss what's going on in everyone's lives. Setting aside five minutes of "family talk" to make way for 55 minutes of "business talk" is a decent trade-off. Or take a page from GitLab. They actively encourage new hires to schedule virtual coffee chats with colleagues during their onboarding to promote informal communication. During one-on-ones, the talk can be just as personalized. Leaders should remember that each one-on-one is about the employee rather than them. Therefore, the dedicated agenda should concentrate on the employee, not the supervisor. Concentrating on the employee's needs fosters a two-way dialogue. It also puts some of the responsibility on the employee to guide the talking points. Remember: Informal interactions can play critical roles in team and employee-leader bonding. These "water cooler conversations" don't always feel intuitive in a digital workspace, so they need to be intentionally allowed. 


Traditional corporate leadership structures are failing women in the C-suite

Perhaps the best solution to the decline in the C-suite, is women declining the C-suite. Women are rejecting patriarchal norms of rigidity, burnout, harassment, limited opportunity and unfair pay in what has been termed the “Great Breakup.” Women are more likely than men to leave their corporate jobs when their needs are not being met at work. In rejecting the C-suite model, women are calling for more power-balanced, equitable models of leadership that involve collaboration rather than domination — a model in which, as Gloria Steinem famously states, “we are linked, not ranked.” Likewise, in co-author Jennifer Walinga’s research on women entrepreneurs, women shared how leaving their corporate jobs to be an entrepreneur fulfilled their desires for a new universe where post-heroic, non-hierarchical leadership models can be enacted. RBC may have unveiled another possible solution to failed leadership parity that they call the “great wealth transfer” — a “seismic change” that is seeing wealth ownership transfer from men to women. In fact, it is estimated that, by 2028, women in Canada will control $4 trillion in assets — almost double the $2.2 trillion they control today.


SCCM Exploitation: Compromising Network Access Accounts

SCCM, or System Center Configuration Manager, is an administrative tool from Microsoft that helps with automating tasks such as deploying software, applying patches, and ensuring devices comply with organizational security policies—all from a central location. In a typical enterprise, SCCM is deployed with a primary site server at the core, coordinating with secondary servers and management points to manage client devices effectively. This setup allows for efficient control over an organization’s devices by defining network boundaries, grouping devices into collections, and assigning specific roles for tasks like software distribution. ... Perhaps an even more interesting avenue of attack is relaying the NTLM credentials of a machine account to the HTTP endpoint, registering a fake computer certificate with SCCM, and, in turn, using this fake registration to retrieve policy documents. ... This technique, or rather chain of techniques developed by Matt Creel, begins with low-privilege SOCKS sessions. Many penetration testers can attest that sometimes after spoofing answers to LLMNR, NetBios Name Service, or mDNS requests, the only authenticated sessions that come in are low-privileged, non-admin sessions.


Finserv State of Play: Perks of Open Source Collaboration

Indeed, even when banks and financial services providers do use open-source software, for Chris Howard, Head of the Open Source Program Office at EPAM, they do not necessarily leverage it transparently. He says: “Many traditional financial institutions are either still afraid to use open-source because they are still closely guarding their secrets, or – if they are using open-source – they are doing so in a very introverted way, consuming the technology but not contributing to the open source, blockchain-powered network. “As such, the main topic of consideration today is collaboration, and the need to come together to build data transparently and then for the industry to standardise these practices,” Howard adds. As not every financial institution leverages open-source software, and perhaps those that do are not open about their use of it, the first step to truly leverage the power of open-source technology is understanding the need to collaborate, with banks coming together to build data. There are some examples already of open-source collaboration across banking organisations, but it arguably does not happen enough, and, where it does, it is the neobanks and digital players that lead the charge.


World Password Day 2024: Are passwords here to stay?

A full-scale, password-based security strategy can be deployed across an enterprise of any size with zero additional hardware. Enterprise-grade hardware for fingerprint or facial recognition is expensive, and thus proves way tougher to scale without sapping the IT budget. Apart from that, adopting new forms of authentication involves a complete shift from the status quo, requiring employees to internalize such a drastic change through training; a lack of training would lead to difficulties in implementation. Large-scale enterprises still use mainframe applications to store decades worth of data simply because it would cost more to transfer them to a different domain than it would to maintain the applications. Similarly, a full-scale enterprise-wide migration from passwords to other forms of authentication poses the risk of a complete financial misfire, even in a best case scenario. Besides, password-based authentication mechanisms, due to their long-standing presence, have the best tech support available, and almost all the problems that could arise when enterprise IT teams deal with passwords already have.



Quote for the day:

"Develop success from failures. Discouragement and failure are two of the surest stepping stones to success." -- Dale Carnegie

Daily Tech Digest - May 02, 2024

3 Reasons Data Engineers Are the Unsung Heroes of GenAI

Ultimately, the success of GenAI depends on data quality. Without accurate, reliable data consistently made available to LLMs, even the most advanced models won’t produce useful outputs. Over the last five years, leading data engineers have adopted observability tooling — including automated monitoring and alerting, similar to DevOps observability software — to help improve data quality. Observability helps data teams monitor and proactively respond to incidents like failed Airflow jobs, broken APIs and misformatted third-party data that put data health at risk. And with end-to-end data lineage, teams gain visibility into upstream and downstream dependencies. Data engineers can provide transparency when observability tooling is applied across the modern AI stack, including vector databases. Lineage allows engineers to trace the source of the data as it’s converted to embeddings, then use that data to generate rich text that the LLM puts in front of the user. 
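The automated monitoring the excerpt describes reduces, at its simplest, to running health checks against each table and alerting on violations. A minimal sketch, assuming a table arrives as a list of dicts; the required columns and the 5% null-rate threshold are illustrative assumptions, not drawn from any particular observability product:

```python
def check_quality(rows, required=("id", "amount"), max_null_rate=0.05):
    """Return alert strings for basic data-health violations:
    an empty table, or required columns with too many nulls."""
    if not rows:
        return ["table is empty"]
    alerts = []
    for col in required:
        nulls = sum(1 for r in rows if r.get(col) is None)
        rate = nulls / len(rows)
        if rate > max_null_rate:
            alerts.append(
                f"{col}: null rate {rate:.0%} exceeds {max_null_rate:.0%}")
    return alerts

rows = [{"id": 1, "amount": 9.5},
        {"id": 2, "amount": None},
        {"id": None, "amount": 3.0}]
print(check_quality(rows))  # both columns breach the threshold
```

Production tooling layers freshness, volume, schema, and distribution checks on top of this idea, and pairs the alerts with lineage so engineers can trace a failing check upstream to its source.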


Why you’ll soon have a digital clone of your own

Far more interesting than an avatar that looks and acts like you is one that thinks and communicates like you — a virtual you with an AI brain for interacting with others on your behalf. Meta is working on tech called “Creator A.I.” that will enable real Instagram influencers to create fake digital AI versions of themselves to interact with fans through direct messages and comments. That initiative is a glimpse of the near future of business communication. We’ve been talking about “digital transformation” for a decade. But only recently has that transformation involved digitizing ourselves. Businesses are now looking to embrace the concept of digital avatars for all the same reasons as other digital transformation initiatives: Higher productivity and lower costs. This process involves the cloning of existing people. Any day now, an industry will emerge where your face and body are scanned, your voice is recorded and your communications are fed into the system, so it knows how you use words. From that point, a virtual version of you can leave high-resolution video messages from a simple command you give to your AI glasses.


Euroviews: 'Quiet hiring' is silently revolutionising the workplace

Quiet hiring is gaining traction as a strategic alternative to traditional recruitment methods. Instead of constantly expanding the workforce through external hires, companies are increasingly focusing on upskilling and retraining the talent right under their noses, to meet evolving business needs. ... Quiet hiring also encourages a culture of continuous learning and development within the workforce, driving employees to proactively seek out and pursue professional growth opportunities. This is done by empowering existing talent, encouraging them to explore different roles and projects within the company, and investing in their development. By taking these steps, management teams can mitigate talent shortages, all while also driving employee loyalty, satisfaction, and retention. Tangible skills have never been more important. In fact, our latest research reveals that companies are integrating intentional learning with key skills like leadership, collaboration, and analytical thinking, aiming to both nurture professional growth and cultivate a supportive team culture.


Understanding emerging AI and data privacy regulations

The AI Act is the first comprehensive AI regulation adopted by EU lawmakers. It imposes obligations on both providers and deployers of AI systems, and specifically targets practices and systems that pose some risk to the health and safety or the fundamental rights of individuals, with the purpose of ensuring “a high level of protection of health, safety, fundamental rights (…), including democracy, the rule of law and environmental protection, against the harmful effects of AI systems.” ... The second category encompasses high-risk systems for which providers are subject to a whole set of requirements including risk management, data governance, record-keeping, transparency, human oversight, accuracy, robustness, and cybersecurity.  ... The third category encompasses certain AI systems that are subject to transparency requirements to make sure the public is in a position to recognize artificially generated or manipulated outputs. The fourth category encompasses other systems which are largely unregulated. Importantly, general-purpose AI models and systems are also in scope.


Indian SaaS Market: Good times ahead

There is substantial demand for SaaS solutions from Small and Medium Businesses (SMBs) seeking to embrace digitisation and enhance their market reach and customer experience. SaaS companies must adopt a mobile-first strategy and tailor their go-to-market approach to the largely untapped SMB segment to increase adoption and ease of use. In India, where personal connections play a significant role, human interaction will be crucial in selling the product. While horizontal SaaS solutions (e.g., CRM, HR, etc.) have traditionally dominated the market, there is a growing preference for vertical-specific solutions tailored to industries such as healthcare, education, finance, and logistics. Vertical SaaS companies stand to gain a loyal customer base by offering integrated solutions aligned with users’ systems and workflows. This trend is likely to gain further traction in 2024 as companies look for more specialised solutions. The emergence of Generative AI presents new opportunities and challenges for SaaS solutions providers. With a large pool of AI talent in India, SaaS companies can capitalise on this trend to develop innovative categories of services and revenue streams.


Maryland Legislature Passes State Privacy Bill With Robust Requirements And Broad Threshold For Application

MODPA prohibits the sale of sensitive data, which is a subset of personal data. A “sale of personal data” means the “exchange of personal data by a controller, a processor, or an affiliate of a controller or processor to a third party for monetary or other valuable consideration.” “Sensitive data” includes data revealing: racial or ethnic origin; religious beliefs; consumer health data; sex life; sexual orientation; status as transgender or nonbinary; national origin; and citizenship or immigration status. It also includes genetic and biometric data, personal data of a consumer who the controller knows or has reason to know is a child, and precise geolocation data. ... MODPA imposes guardrails with respect to the processing and sale of minors’ personal data. Controllers are prohibited from selling personal data of a consumer or using that data for purposes of targeted advertising if the controller knew or should have known that the consumer is under the age of 18. This prohibition is stricter than other laws, which require actual knowledge of consumers’ age or provide an opportunity for consumers to opt in to the processing and sale of minors’ data.


Expectations vs. reality: A real-world check on generative AI

There are gen AI success stories in verticals like document engineering, where Docugami offers custom small language models that build a knowledge graph from a customer’s own complex documents and can be used both for document generation and to extract data. Commercial insurance is a vertical Docugami CEO Jean Paoli says has been an early adopter, including statements of value and certificates of insurance, as well as policy documents with renewal dates, penalties, and liabilities. That’s critical information describing the risk of both individual customers and the entire portfolio, which has been difficult to extract and consolidate manually for generating new quotes or representing the portfolio to reinsurers. “These are real scenarios that save you millions of dollars, not a few hundred bucks,” Paoli says. Like everyone else, large Docugami customers created gen AI committees and started pilots in 2023, but many have already moved from discovery to implementation, starting production deployments at least six months ago and seeing real returns, chief business officer Alan Yates says.
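To make the extraction problem concrete, here is a deliberately simple rule-based sketch of pulling renewal dates out of policy text. This is a toy illustration only; Docugami's actual approach uses small language models and a knowledge graph, not regular expressions, and the field name and date format below are assumptions for the example.

```python
import re
from datetime import datetime

def extract_renewal_dates(text):
    """Toy rule-based extraction of ISO-format renewal dates from policy text.
    Real document-engineering tools use learned models, not regexes."""
    pattern = r"renewal date[:\s]+(\d{4}-\d{2}-\d{2})"
    return [datetime.strptime(m, "%Y-%m-%d").date()
            for m in re.findall(pattern, text, flags=re.IGNORECASE)]

policy = "Policy A-102. Renewal date: 2025-03-01. Late penalty: 2% per month."
print(extract_renewal_dates(policy))  # [datetime.date(2025, 3, 1)]
```

A rule-based extractor like this breaks as soon as documents vary in wording or layout, which is precisely why the article's point about learned models for heterogeneous, complex documents matters.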


9 Steps Towards an Agile Architecture

The first step in breaking these limiting beliefs is to gain consensus that the system will be released in a series of increments. The essence of the Minimum Viable Product (MVP) approach is that each product increment will attempt to deliver at least one outcome to at least a subset of the users of the system. The organization will measure its success in achieving these outcomes and use that feedback to decide what it should work on next. Using an MVP/MVA approach changes the conversations that a development team and its stakeholders have. ... Learning to develop architecture incrementally takes practice, and no team is going to master it immediately. Most teams lean toward developing more architecture than they need, only to find that some of it was wrong, or at least unnecessary. They will probably struggle to balance trade-offs and learn that some of their choices were wrong. All of this is good. They have to learn the right balance on their own, and they can only do this by making some wrong choices and seeing the impact.


Tracking Data Breaches: Targeting of Vulnerabilities Surges

To penetrate corporate networks, hackers most often gained unauthorized access to a victim's web applications, including cloud-based email or collaboration tools, Verizon said. The next most common ways that attackers broke in involved phishing emails, followed by exploiting vulnerabilities in web applications, stealing credentials for desktop sharing software, and using stolen VPN credentials. For the first time, Verizon's annual report counts software vulnerabilities - such as the widely exploited MOVEit bug - as a supply chain security metric. "We want to give organizations the ability to see what they can effect in their environment by choosing vendors who follow 'secure by design' practices," Widup said. The Clop campaign targeting MOVEit was notable in part because it seemed to run for only two or three days. While vendor Progress Software quickly issued a patch, the latest count from security firm Emsisoft is that Clop's blitzkrieg tactics directly or indirectly affected 2,770 organizations and exposed data pertaining to 95 million individuals.


How to Manage a Rapidly Growing IT Team

The key to keeping pace with rapid growth lies in structuring an IT team that embraces new technology and training to stay current, says Diane Rafferty, managing director, national technology group, at employment placement firm Atrium, via email. "If you don't offer training on the most up-to-date technologies, your team will have to do more with less." Leadership must clearly define expectations, yet leave room for movement, Velco says. "Autonomy inspires ownership of roles and collaborative problem-solving beyond defined bounds," he explains. Structures alone don't scale, Velco notes, but processes do. "Implementing agile frameworks like Scrum or Kanban encourages frequent reassessment and adjustment." When it comes to preparing for team growth, Daniele Gemignani, CTO at no-code business process automation solution provider Pipefy, is a strong believer in mentorship and leadership development programs. "For teams with a mix of seniority levels, pairing junior members with more experienced mentors can accelerate their development and readiness to take on more complex tasks or leadership roles," she says.



Quote for the day:

"Don’t be distracted by criticism. Remember — the only taste of success some people get is to take a bite out of you." -- Zig Ziglar