Showing posts with label VPN. Show all posts
Showing posts with label VPN. Show all posts

Daily Tech Digest - February 11, 2025


Quote for the day:

"Your worth consists in what you are and not in what you have." -- Thomas Edison


Protecting Your Software Supply Chain: Assessing the Risks Before Deployment

Given the vast number of third-party components used in modern IT, it's unrealistic to scrutinize every software package equally. Instead, security teams should prioritize their efforts based on business impact and attack surface exposure. High-privilege applications that frequently communicate with external services should undergo product security testing, while lower-risk applications can be assessed through automated or less resource-intensive methods. Whether done before deployment or as a retrospective analysis, a structured approach to PST ensures that organizations focus on securing the most critical assets first while maintaining overall system integrity. ... While Product Security Testing will never prevent a breach of a third party out of your control, it is necessary to allow organizations to make informed decisions about their defensive posture and response strategy. Many organizations follow a standard process of identifying a need, selecting a product, and deploying it without a deep security evaluation. This lack of scrutiny can leave them scrambling to determine the impact when a supply chain attack occurs. By incorporating PST into the decision-making process, security teams gain critical documentation, including dependency mapping, threat models, and specific mitigations tailored to the technology in use. 


Google’s latest genAI shift is a reminder to IT leaders — never trust vendor policy

Entities out there doing things you don’t like are always going to be able to get generative AI (genAI) services and tools from somebody. You think large terrorist cells can’t use their money to pay somebody to craft LLMs for them? Even the most powerful enterprises can’t stop it from happening. But, that may not be the point. Walmart, ExxonMobil, Amazon, Chase, Hilton, Pfizer and Toyota and the rest of those heavy-hitters merely want to pick and choose where their monies are spent. Big enterprises can’t stop AI from being used to do things they don’t like, but they can make sure none of it is being funded with their money. If they add a clause to every RFP that they will only work with model-makers that agree to not do X, Y, or Z, that will get a lot of attention. The contract would have to be realistic, though. It might say, for instance, “If the model-maker later chooses to accept payments for the above-described prohibited acts, they must reimburse all of the dollars we have already paid and must also give us 18 months notice so that we can replace the vendor with a company that will respect the terms of our contracts.” From the perspective of Google, along with Microsoft, OpenAI, IBM, AWS and others, the idea is to take enterprise dollars on top of government contracts. 


Is Fine-Tuning or Prompt Engineering the Right Approach for AI?

It’s not just about having access to GPUs — it’s about getting the most out of proprietary data with new tools that make fine-tuning easier. Here’s why fine-tuning is gaining traction:Better results with proprietary data: Fine-tuning allows businesses to train models on their own data, making the AI much more accurate and relevant to their specific tasks. This leads to better outcomes and real business value. Easier than ever before: Tools like Hugging Face’s Open Source libraries, PyTorch and TensorFlow, along with cloud services, have made fine-tuning more accessible. These frameworks simplify the process, even for teams without deep AI expertise. Improved infrastructure: The rising availability of powerful GPUs and cloud-based solutions has made it much easier to set up and run fine-tuning at scale. While fine-tuning opens the door to more customized AI, it does require careful planning and the right infrastructure to succeed. ... As enterprises accelerate their AI adoption, choosing between prompt engineering and fine-tuning will have a significant impact on their success. While prompt engineering provides a quick, cost-effective solution for general tasks, fine-tuning unlocks the full potential of AI, enabling superior performance on proprietary data.


Shifting left without slowing down

On the one hand, automation enabled by GenAI tools in software development is driving unprecedented developer productivity, further emphasizing the gap created by manual application security controls, like security reviews or threat modeling. But in parallel, recent advancements in code understanding enabled by these technologies, together with programmatic policy-as-code security policies, enable a giant leap in the value security automation can bring. ... The first step is recognizing security as a shared responsibility across the organization, not just a specialized function. Equipping teams with automated tools and clear processes helps integrate security into everyday workflows. Establishing measurable goals and metrics to track progress can also provide direction and accountability. Building cross-functional collaboration between security and development teams sets the foundation for long-term success. ... A common pitfall is treating security as an afterthought, leading to disruptions that strain teams and delay releases. Conversely, overburdening developers with security responsibilities without proper support can lead to frustration and neglect of critical tasks. Failure to adopt automation or align security goals with development objectives often results in inefficiency and poor outcomes. 


How To Approach API Security Amid Increasing Automated Attack Sophistication

We’ve now gone from ‘dumb’ attacks—for example, web-based attacks focused on extracting data from third parties and on a specific or single vulnerability—to ‘smart’ AI-driven attacks often involving picking an actual target, resulting in a more focused attack. Going after a particular organization, perhaps a large organization or even a nation-state, instead of looking for vulnerable people is a significant shift. The sophistication is increasing as attackers manipulate request payloads to trick the backend system into an action. ... Another element of API security is being aware of sensitive data. Personal Identifiable Information (PII) is moving through APIs constantly and is vulnerable to theft or data exfiltration. Organizations do not often pay attention to vulnerabilities. Still, they pay attention when the result is damage to their organization through leaked PII, stolen finances, or brand reputation. ... The security teams know the network systems and the infrastructure well but don't understand the application behaviors. The DevOps team tends to own the applications but doesn’t see anything in production. This split boundary in most organizations makes it ripe for exploitation. Many data exfiltration cases fall in this no man’s land since an authenticated user executes most incidents.


Top 5 ways attackers use generative AI to exploit your systems

Gen AI tools help criminals pull together different sources of data to enrich their campaigns — whether this is group social profiling, or targeted information gleaned from social media. “AI can be used to quickly learn what types of emails are being rejected or opened, and in turn modify its approach to increase phishing success rate,” Mindgard’s Garraghan explains. ... The traditionally difficult task of analyzing systems for vulnerabilities and developing exploits can be simplified through use of gen AI technologies. “Instead of a black hat hacker spending the time to probe and perform reconnaissance against a system perimeter, an AI agent can be tasked to do this automatically,” Mingard’s Garraghan says. ... “This sharp decrease strongly indicates that a major technological advancement — likely GenAI — is enabling threat actors to exploit vulnerabilities at unprecedented speeds,” ReliaQuest writes. ... Check Point Research explains: “While ChatGPT has invested substantially in anti-abuse provisions over the last two years, these newer models appear to offer little resistance to misuse, thereby attracting a surge of interest from different levels of attackers, especially the low skilled ones — individuals who exploit existing scripts or tools without a deep understanding of the underlying technology.”


Why firewalls and VPNs give you a false sense of security

VPNs and firewalls play a crucial role in extending networks, but they also come with risks. By connecting more users, devices, locations, and clouds, they inadvertently expand the attack surface with public IP addresses. This expansion allows users to work remotely from anywhere with an internet connection, further stretching the network’s reach. Moreover, the rise of IoT devices has led to a surge in Wi-Fi access points within this extended network. Even seemingly innocuous devices like Wi-Fi-connected espresso machines, meant for a quick post-lunch pick-me-up, contribute to the proliferation of new attack vectors that cybercriminals can exploit. ... More doesn’t mean better when it comes to firewalls and VPNs. Expanding a perimeter-based security architecture rooted in firewalls and VPNs means more deployments, more overhead costs, and more time wasted for IT teams – but less security and less peace of mind. Pain also comes in the form of degraded user experience and satisfaction with VPN technology for the entire organization due to backhauling traffic. Other challenges like the cost and complexity of patch management, security updates, software upgrades, and constantly refreshing aging equipment as an organization grows are enough to exhaust even the largest and most efficient IT teams.


Building Trust in AI: Security and Risks in Highly Regulated Industries

AI hallucinations have emerged as a critical problem, with systems generating plausible but incorrect information - for instance, AI fabricated software dependencies, such as PyTorture, leading to potential security risks. Hackers could exploit these hallucinations by creating malicious components masquerading as real ones. In another case, an AI libelously fabricated an embezzlement claim, resulting in legal action - marking the first time AI was sued for libel. Security remains a pressing concern, particularly with plugins and software supply chains. A ChatGPT plugin once exposed sensitive data due to a flaw in its OAuth mechanism, and incidents like PyTorch’s vulnerable release over Christmas demonstrate the risks of system exploitation. Supply chain vulnerabilities affect all technologies, while AI-specific threats like prompt injection allow attackers to manipulate outputs or access sensitive prompts, as seen in Google Gemini. ... Organizations can enhance their security strategies by utilizing frameworks like Google’s Secure AI Framework (SAIF). These frameworks highlight security principles, including access control, detection and response systems, defense mechanisms, and risk-aware processes tailored to meet specific business needs.


When LLMs become influencers

Our ability to influence LLMs is seriously circumscribed. Perhaps if you’re the owner of the LLM and associated tool, you can exert outsized influence on its output. For example, AWS should be able to train Amazon Q to answer questions, etc., related to AWS services. There’s an open question as to whether Q would be “biased” toward AWS services, but that’s almost a secondary concern. Maybe it steers a developer toward Amazon ElastiCache and away from Redis, simply by virtue of having more and better documentation and information to offer a developer. The primary concern is ensuring these tools have enough good training data so they don’t lead developers astray. ... Well, one option is simply to publish benchmarks. The LLM vendors will ultimately have to improve their output or developers will turn to other tools that consistently yield better results. If you’re an open source project, commercial vendor, or someone else that increasingly relies on LLMs as knowledge intermediaries, you should regularly publish results that showcase those LLMs that do well and those that don’t. Benchmarking can help move the industry forward. By extension, if you’re a developer who increasingly relies on coding assistants like GitHub Copilot or Amazon Q, be vocal about your experiences, both positive and negative. 


Deepfakes: How Deep Can They Go?

Metaphorically, spotting deepfakes is like playing the world’s most challenging game of “spot the difference.” The fakes have become so sophisticated that the inconsistencies are often nearly invisible, especially to the untrained eye. It requires constant vigilance and the ability to question the authenticity of audiovisual content, even when it looks or sounds completely convincing. Recognizing threats and taking decisive actions are crucial for mitigating the effects of an attack. Establishing well-defined policies, reporting channels, and response workflows in advance is imperative. Think of it like a citywide defense system responding to incoming missiles. Early warning radars (monitoring) are necessary to detect the threat; anti-missile batteries (AI scanning) are needed to neutralize it; and emergency services (incident response) are essential to quickly handle any impacts. Each layer works in concert to mitigate harm. ... If a deepfake attack succeeds, organizations should immediately notify stakeholders of the fake content, issue corrective statements, and coordinate efforts to remove the offending content. They should also investigate the source, implement additional verification measures, and provide updates to rebuild trust and consider legal action. 


Daily Tech Digest - December 12, 2024

The future of AI regulation is up in the air: What’s your next move?

The problem is, Jones says, is that lack of regulations boils down to a lack of accountability, when it comes to what your large language models are doing — and that includes hoovering up intellectual property. Without regulations and legal ramifications, resolving issues of IP theft will either boil down to court cases, or more likely, especially in cases where the LLM belongs to a company with deep pockets, the responsibility will slide downhill to the end users. And when profitability outweighs the risk of a financial hit, some companies are going to push the boundaries. “I think it’s fair to say that the courts aren’t enough, and the fact is that people are going to have to poison their public content to avoid losing their IP,” Jones says. “And it’s sad that it’s going to have to get there, but it’s absolutely going to have to get there if the risk is, you put it on the internet, suddenly somebody’s just ripped off your entire catalog and they’re off selling it directly as well.” ... “These massive weapons of mass destruction, from an AI perspective, they’re phenomenally powerful things. There should be accountability for the control of them,” Jones says. “What it will take to put that accountability onto the companies that create the products, I believe firmly that that’s only going to happen if there’s an impetus for it.”


Leading VPN CCO says digital privacy is "a game of chess we need to play"

Sthanu calls VPNs a first step, and puts forward secure browsers as a second. IPVanish recently launched a secure browser, which is an industry first, and something not offered by other top VPN providers. "It keeps your browser private, blocking tracking, encrypting the sessions, but also protecting your device from any malware," Sthanu said. IPVanish's secure browser utilises the cloud. Session tracking, cookies, and targeting are all eliminated, as web browsing operates in a cloud sandbox. ... Encrypting your data is a vital part of what VPNs do. AES 256-bit and ChaCha20 encryption are currently the standards for the most secure VPNs, which do an excellent job at encrypting and protecting your data. These encryption ciphers can protect you against the vast majority of cyber threats out there right now – but as computers and threats develop, security will need to develop too. Quantum computers are the next stage in computing evolution, and there will come a time, predicted to be in the next five years, where these computers can break 256-bit encryption – this is being referred to as "Q-day." Quantum computers are not readily available at this moment in time, with most found in universities or research labs, but they will become more widespread. 


Kintsugi Leaders: Conservers of talent who convert the weak into winners

Profligate Leaders are not only gluttonous in their appetite for consuming resources, they are usually also choosy about the kind they will order. Not for them the tedious effort of using their own best-selling, training cookbook for seasoning and stirring the youth coming out from the country’s stretched and creaking educational system. On the contrary, they push their HR to queue up for ready-cooked candidates outside the portals of elite institutes on day zero. ... Kintsugi Leaders can create nobility of a different kind if they follow three precepts. The central one is the willingness to bet big and take risks on untried talent. In one of my Group HR roles, eyebrows were raised when I placed the HR leadership of large businesses in the hands of young, internally groomed talent instead of picking stars from the market. ... There is a third (albeit rare) kind of HR leader: the Trusted Transformer who can convert a Profligate Leader into the Kintsugi kind. Revealable corporate examples are thin on the ground. In keeping with the Kintsugi theme, then, I have to fall back on Japan. Itō Hirobumi had a profound influence on Emperor Meiji and played a pivotal role in shaping the political landscape of Meiji-era Japan.


4 North Star Metrics for Platform Engineering Teams

“Acknowledging that DORA, SPACE and DevEx provide different slivers or different perspectives into the problem, our goal was to create a framework that encapsulates all the frameworks,” Noda said, “like one framework to rule them all, that is prescriptive and encapsulates all the existing knowledge and research we have.” DORA metrics don’t mean much at the team level, but, he continued, developer satisfaction — a key measurement of platform engineering success — doesn’t matter to a CFO. “There’s a very intentional goal of making especially the key metrics, but really all the metrics, meaningful to all stakeholders, including managers,” Noda said. “That enables the organization to create a single, shared and aligned definition of productivity so everyone can row in the same direction.” The Core 4 key metrics are:An average of diffs per engineer is used to measure speed. The Developer Experience Index, or homegrown developer experience surveys, is used to measure effectiveness. A change failure rate is used to measure quality. The percentage of time spent on new capabilities to measure impact. DX’s own DXI, which uses a standardized set of 14 Likert-scale questions — from strongly agree to strongly disagree — is currently only available to DX users.


The future of data: A 5-pillar approach to modern data management

To succeed in today’s landscape, every company — small, mid-sized or large — must embrace a data-centric mindset. This article proposes a methodology for organizations to implement a modern data management function that can be tailored to meet their unique needs. By “modern”, I refer to an engineering-driven methodology that fully capitalizes on automation and software engineering best practices. This approach is repeatable, minimizes dependence on manual controls, harnesses technology and AI for data management and integrates seamlessly into the digital product development process. ... Unlike the technology-focused Data Platform pillar, Data Engineering concentrates on building distributed parallel data pipelines with embedded business rules. It is crucial to remember that business needs should drive the pipeline configuration, not the other way around. For example, if preserving the order of events is essential for business needs, the appropriate batch, micro-batch or streaming configuration must be implemented to meet these requirements. Another key area involves managing the operational health of data pipelines, with an even greater emphasis on monitoring the quality of the data flowing through the pipeline. 


How and Why the Developer-First Approach Is Changing the Observability Landscape

First and foremost, developers aim to avoid issues altogether. They seek modern observability solutions that can prevent problems before they occur. This goes beyond merely monitoring metrics: it encompasses the entire software development lifecycle (SDLC) and every stage of development within the organization. Production issues don't begin with a sudden surge in traffic; they originate much earlier when developers first implement their solutions. Issues begin to surface as these solutions are deployed to production and customers start using them. Observability solutions must shift to monitoring all the aspects of SDLC and all the activities that happen throughout the development pipeline. This includes the production code and how it’s running, but also the CI/CD pipeline, development activities, and every single test executed against the database. Second, developers deal with hundreds of applications each day. They can’t waste their time manually tuning alerting for each application separately. The monitoring solutions must automatically detect anomalies, fix issues before they happen, and tune the alarms based on the real traffic. They shouldn’t raise alarms based on hard limits like 80% of the CPU load.


We must adjust expectations for the CISO role

The sense of vulnerability CISOs feel today is compounded by a shifting accountability model in the boardroom. As cybersecurity incidents make front-page news more frequently, boards and executive teams are paying closer attention. This increased scrutiny is a double-edged sword: on the one hand, it can mean greater support and resources; on the other, it often translates to CISOs being in the proverbial hot seat. What’s more, cybersecurity is still a rapidly evolving field with few long-standing best practices. It’s a space marked by constant adaptation, bringing a certain degree of trial and error. When an error occurs—especially one that leads to a breach—the CISO’s role is scrutinized. While the entire organization might have a role in cybersecurity, CISOs are often expected to bear the brunt of accountability. This dynamic is unsettling for many in the position, and the 99% of CISOs who fear for their job security in the event of a breach clearly illustrates this point. So, what can be done? Both organizations and CISOs are responsible for recalibrating expectations and addressing the root causes of these pervasive job security fears. For organizations, a starting point is to shift cybersecurity from a reactive to a proactive stance. Investing in continuous improvement—whether through advanced security technologies, employee training, or cyber insurance—is crucial.


Bug bounty programs can deliver significant benefits, but only if you’re ready

The most significant benefit of a bug bounty program is finding vulnerabilities an organization might not have otherwise discovered. “A bug bounty program gives you another avenue of identifying vulnerabilities that you’re not finding through other processes,” such as internal vulnerability scans, Stefanie Bartak, associate director of the vulnerability management team at NCC Group, tells CSO. Establishing a bug bounty program signals to the broader security research community that an organization is serious about fixing bugs. “For an enterprise, it’s a really good way for researchers, or anyone, to be able to contact them and report something that may not be right in their security,” Louis Nyffenegger, CEO of PentesterLab, tells CSO. Moreover, a bug bounty program will offer an organization a wider array of talent to bring perspectives that in-house personnel don’t have. “You get access to a large community of diverse thinkers, which help you find vulnerabilities you may otherwise not get good access to,” Synack’s Lance says. “That diversity of thought can’t be underestimated. Diversity of thought and diversity of researchers is a big benefit. You get a more hardened environment because you get better or additional testing in some cases.”


Harnessing SaaS to elevate your digital transformation journey

The impact of AI-driven SaaS solutions can be seen across multiple industries. In retail, AI-powered SaaS platforms enable businesses to analyze consumer behavior in real-time, providing personalized recommendations that drive sales. In manufacturing, AI optimizes supply chain management, reducing waste and increasing productivity. In the finance sector, AI-driven SaaS automates risk assessment, improving decision-making and reducing operational costs. ... As businesses continue to adopt SaaS and AI-driven solutions, the future of digital transformation looks promising. Companies are no longer just thinking about automating processes or improving efficiency, they are investing in technologies that will help them shape the future of their industries. From developing the next generation of products to understanding their customers better, SaaS and AI are at the heart of this evolution. CTOs, like myself, are now not only responsible for technological innovation but are also seen as key contributors to shaping the company’s overall business strategy. This shift in leadership focus will be critical in helping organizations navigate the challenges and opportunities of digital transformation. By leveraging AI and SaaS, we can build scalable, efficient, and innovative systems that will drive growth for years to come. 


What makes product teams effective?

More enterprises are adopting a cross-functional team model, yet many still tend to underinvest in product management. While they make sure to fill the product owner role—a person accountable for translating business needs into technology requirements—they do not always choose the right individual for the product manager role. Effective product managers are business leaders with the mindset and technical skills to guide multiple product teams simultaneously. They shape product strategy, define requirements, and uphold the bar on delivery quality, usually partnering with an engineering or technology lead in a two-in-a-box model. ... Unsurprisingly, when organizations recognize individual expertise, provide options for career progression, and base promotions on capabilities, employees are more engaged and satisfied with their teams. Similarly, by standardizing and reducing the overall number of roles, organizations naturally shift to a balanced ratio of orchestrators (minority) to doers (majority), which increases team capacity without hiring more employees. This shift helps ensure teams can meet their delivery commitments and creates a transparent environment where individuals feel empowered and informed.



Quote for the day:

“Things come to those who wait, but only the things left by those who hustle” -- Abraham Lincoln

Daily Tech Digest - May 05, 2024

Building Resilient and Secure FinTech Infrastructure: Strategies for DDoS Mitigation and Cybersecurity

Companies may have to pay anything from $1 million to over $5 million for every hour of downtime, not to mention any further fines, fees, or penalties under the law. It is in addition to higher DDoS security investments made after the fact and higher cyber insurance premiums. As a result, DDoS victims may have to pay for ransomware. However, this isn’t a fix. It does not ensure that a DDoS assault won’t occur again. FinTech businesses need to be proactive if they want total DDoS resilience. Regardless of the security services they use, organizations are extremely susceptible to denial-of-service (DDoS) assaults. The only way to withstand such attacks is to implement non-disruptive DDoS testing and obtain uninterrupted and comprehensive insight into the DDoS security posture. Continuous DDoS testing on live settings is necessary for FinTech firms and their DDoS protection vendors to identify vulnerabilities, prioritize remediation, and ensure that the solutions are applied appropriately. Staying ahead of the threat curve means taking a preventive rather than a reactive approach to safeguarding online services against DDoS attacks.


Data Governance Act: Understanding the Cross-Sectoral Instrument

The Data Governance Act sets out guidelines for the utilization within the EU for the data held by the public sector entities. This data is secured and protected for reasons such as commercial confidentiality, third-party intellectual property rights and the protection of personal data. While the Act does not explicitly detail the circumstances under which it is applied to the foreign organizations, there are several provisions which imply its extraterritorial implications. Any non-EU entity which provides services within the European Union and qualifies as a data altruism company or intermediary must appoint a legal representative in every member state it is operating in. ... The Act complements the Open Data Directive by addressing the re-use of the secured data which is not under the functions of the latter. It establishes several secured walls for the re-utilization of such data held by the public sector bodies including the governmental and other public sector entities. The Act does not obligate the public sector bodies to permit the reuse of data but sets conditions for any such authorization. 


Six Data Quality Dimensions to Get Your Data AI-Ready

Compliance: The degree to which data is in accordance with laws, regulations, or standards. How is the use of your data now changing? Do you need to uphold your data to higher standards and different requirements in these new use cases? Consider also: A true disruptor like GenAI may result in the need for new policy, and therefore, new regulation. Can you anticipate future regulation around AI usage given your industry and data? ... Accessibility: The ease with which data can be consulted or retrieved. Scalability and repeatability require consistent accessibility. Is your data reliably accessible by the right people and technologies? How accessible must it be? If your data was temporarily inaccessible, how damaging would that be? What is the acceptable threshold that would ensure your project will succeed? Access Security: The degree to which access to datasets is restricted. Consider the privileges and permissions to your data and the implications. Are you building an AI tool in-house or are you using a service? Which of your company’s data are you willing to provide third party access to? Ensure that you are not sharing data that you cannot or should not share. 


Want to drive more secure GenAI? Try automating your red teaming

When red-teaming GenAI, manual probing is a time-intensive but necessary part of identifying potential security blind spots. However, automation can help scale your GenAI red teaming efforts by automating routine tasks and identifying potentially risky areas that require more attention. At Microsoft, we released the Python Risk Identification Tool for generative AI (PyRIT)—an open-access framework designed to help security researchers and ML engineers assess the robustness of their LLM endpoints against different harm categories such as fabrication/ungrounded content like hallucinations, misuse issues like machine bias, and prohibited content such as harassment. PyRIT is battle-tested by the Microsoft AI Red Team. It started off as a set of one-off scripts as we began red teaming GenAI systems in 2022, and we’ve continued to evolve the library ever since. Today, PyRIT acts as an efficiency gain for the Microsoft AI Red Team—shining a light on risk hot spots so that security professionals can then explore them. This allows the security professional to retain control of the AI red team strategy and execution. 


A Novel AI Approach to Enhance Language Models: Multi-Token Prediction

The researchers behind this study propose a new technique called multi-token prediction. Instead of predicting one token (word) at a time, this method trains the model to predict multiple future tokens simultaneously. Imagine it like this: While learning a language, instead of guessing one word at a time, you’re challenged to predict entire phrases or even sentences. Sounds intriguing, right? So, how does this multi-token prediction work? The researchers designed a model architecture with a shared trunk that produces a latent representation of the input context. This shared trunk is then connected to multiple independent output heads, each responsible for predicting one of the future tokens. For example, if the model is set to predict four future tokens, it will have four output heads working in parallel. During training, the model is fed a text corpus, and at each position, it is tasked with predicting the next n tokens simultaneously. This approach encourages the model to learn longer-term patterns and dependencies in the data, potentially leading to better performance, especially for tasks that require understanding the broader context.


Uncomplicating the complex: How Spanner simplifies microservices-based architectures

Sharding is a powerful tool for database scalability. When implemented correctly, it can enable applications to handle a much larger volume of read and write transactions. However, sharding does not come without its challenges and brings its own set of complexities that need careful navigation. ... Over time, database complexity can grow along with increased traffic, adding further toil to operations. For large systems, a combination of sharding along with attached scale-out read replicas might be required to help ensure cost-effective scalability and performance. This combined dual-strategy approach, while effective in handling increasing traffic, significantly ramps up the complexity of the system's architecture. The above illustration captures the need to add scalability and availability to a transactional relational database powering a service. ... We want to emphasize that we’re not arguing that Spanner is only a good fit for microservices. All the things that make Spanner a great fit for microservices also make it great for monolithic applications.


VPNs aren't invincible—5 things a VPN can't protect you from

A VPN can deter a hacker from trying to intercept your internet traffic, but it cannot prevent you from landing on a scam website yourself or sharing your personal details with someone on the web. Also, thanks to AI-powered tools, attackers can craft increasingly convincing messages at high speed. This means phishing attacks will keep happening in the future. The good news is that, despite how well-made the messages are, you can always spot a scam. As a rule of thumb, if something is too good to be true it generally is—so, beware of grand promises. ... While a VPN keeps you more anonymous online, preventing some forms of tracking, it only works at a network level. Tracking cookies, though, are stored directly on your web browser. Hence, VPNs aren't much of a help against such trackers. To mitigate the risks, I recommend clearing the internet cookies on your devices on a regular basis. ... As we have seen, VPNs are not a magic wand that'll magic away cyber threats and danger. Nonetheless, this software still protects you from a great deal of risks and strongly enhances your digital posture—so, all in all, VPNs are still vital pieces of security equipment.


Top Digital Transformation Themes and EA Strategies

Enterprise Architects (EAs) face significant challenges in engaging business stakeholders. One of the main difficulties is shifting the perception of the EA platform from a tool imposed by the EA team to a collaborative instrument that benefits the entire business. EA teams also have a tendency to focus inwardly, leaning on technical architectural terminology and objectives that other business units may not understand or resonate with. ... Architects often struggle to communicate the ROI of Enterprise Architecture or link architectural initiatives and value directly to the business's overarching goals and metrics. A common hurdle is the traditional IT-centric approach of EA, which may not align with the dynamic needs of the business. ... The segregation of data into siloes hampers transparency and makes it difficult to achieve a comprehensive overview of the organization’s data landscape. So when architects try to bring all this information together, the complexity often results in bottlenecks as they struggle to manage and govern IT effectively. This severely limits the potential for insights and efficiency across the organization. 


How To Build a Scalable Platform Architecture for Real-Time Data

Many platforms enable autoscaling, like adjusting the number of running instances based on CPU usage, but the level of automation varies. Some platforms offer this feature inherently, while others require manual configuration, like setting the maximum number of parallel tasks or workers for each job. During deployment, the control plane provides a default setting based on anticipated demand but continues to closely monitor metrics. It then scales up the number of workers, tasks or instances, allocating additional resources to the topic as required. ... Enterprises prioritize high availability, disaster recovery and resilience to maintain ongoing operations during outages. Most data-streaming platforms already have robust guardrails and deployment strategies built in, primarily by extending their cluster across multiple partitions, data centers and cloud-agnostic availability zones. However, it involves trade-offs like increased latency, potential data duplication and higher costs. Here are some recommendations when planning for high availability, disaster recovery and resilience.


How will AI help democratise intelligence in algorithmic trading? Here are 5 ways

AI has played a significant role in making algorithmic trading more accessible to the masses. The emergence of open AI sources like ChatGPT, and open source AI models like Llama 3 has made AI technology available to anyone with an internet connection at a mere $20 a month or so cost. This has led traders to gain access to algorithms, creation of algorithms, that were previously only available to large institutions. Additionally, many platforms have made it possible for traders to automate their trades without requiring coding knowledge, thanks to the contribution of AI in algo trading. ... Algorithms powered by market research and AI can significantly reduce trading errors caused by emotions and impulsive decisions. Traditional trading methods rely heavily on expertise, intuition, and precision, whereas AI-powered algorithms eliminate the need for these factors and enhance the accuracy, efficiency, and overall performance of trades. These algorithms can be customised to fit various market situations, whether it's a stable or volatile market.



Quote for the day:

"Courage doesn't mean you don't get afraid. Courage means you don't let fear stop you." -- Bethany Hamilton

Daily Tech Digest - May 01, 2024

4 reasons firewalls and VPNs are exposing organizations to breaches

Unfortunately, the attack surface problems of perimeter-based architectures go well beyond the above, and that is because of firewalls and VPNs. These tools are the means by which castle-and-moat security models are supposed to defend hub-and-spoke networks but using them has unintended consequences. Firewalls and VPNs have public IP addresses that can be found on the public internet. This is by design so that legitimate, authorized users can access the network via the web, interact with the connected resources therein and do their jobs. However, these public IP addresses can also be found by malicious actors who are searching for targets that they can attack in order to gain access to the network. In other words, firewalls and VPNs give cybercriminals more attack vectors by expanding the organization’s attack surface. Ironically, this means that the standard strategy of deploying additional firewalls and VPNs to scale and improve security actually exacerbates the attack surface problem further. Once cybercriminals have successfully identified an attractive target, they unleash their cyberattacks in an attempt to penetrate the organization’s defenses. 


CI/CD Observability: A Rich New Opportunity for OpenTelemetry

We deploy things, we see things catch on fire and then we try to mitigate the fire. But if we only observe the latest stages of the development and deployment cycle, it’s too late. We don’t know what happened in the build phase or the test phase, or we have difficulty in root cause analysis or due to increases in mean time to recovery, and also due to missed optimization opportunities. We know our CI pipelines take a long time to run, but we don’t know what to improve if we want to make them faster. If we shift our observability focus to the left, we can address issues before they escalate, enhance efficiency by cutting problems in the process, increase the robustness and integrity of our tests, and minimize costs and expenses related to post-deployment and downtime. ... This is a really exciting time for the observability community. By getting data out of our CIs and integrating it with observability systems, we can trace back to the logs in builds, and see important information — like when the first time something failed was — from our CI. From there, we can find out what’s producing errors, in a way that’s much better pinpointed to the exact time of their origin.


Cyber breach misinformation creates a haze of uncertainty

Fueling the rise of data breach misinformation is the speed at which fake data breach reports are spread online. In a recent blog post, Hunt wrote: “There are a couple of Twitter accounts in particular that are taking incidents that appear across a combination of a popular clear web hacking forum and various dark web ransomware websites and ‘raising them to the surface,’ so to speak. Incidents that may have previously remained on the fringe are being regularly positioned in the spotlight where they have much greater visibility.” “It’s getting very difficult at the moment because not only are there more breaches than ever, but there’s just more stuff online than ever,” Hunt says.  ... “We need to get everything out from in the shadows,” Callow says. “Far too much happens in the shadows. The more light can be shone on it, the better. That would be great in multiple ways. It’s not just a matter of removing some of the leverage threat actors have. It’s also giving the cybersecurity community and the government access to better data. Far too much goes unreported.” 


Making cybersecurity more appealing to women, closing the skills gap

From my perspective of a consistent interest in seeking industry expertise for guidance and mentorship in cyber security there is currently both a good news and bad news story. There are some strong examples out there, like Women in Cybersecurity, but I think women can be reluctant to join them because they don’t want to be different to their male counterparts and want to be part of an inclusive operating structure such as Tech Channel Ambassadors recently established to address this significant gap in the sector Personal mentorship can drive really positive change, and it’s certainly had a strong influence on my career. There’s still a shortfall in organised mentor programmes with businesses, but I see a lot of talented people identifying that gap and reaching out to support those starting out in their careers more proactively, which is fantastic. It’s important to realise that mentors don’t need to be within the same company or even the same industry. I’m currently mentoring one person within Sapphire and six others outside the company. Meanwhile, I’ve had four incredible mentors myself—and one of them is a CEO in the fashion industry. 


The power of persuasion: Google DeepMind researchers explore why gen AI can be so manipulative

Persuasion can be rational or manipulative — the difference being the underlying intent. The end game for both is delivering information in a way that will likely shape, reinforce or change a person’s behaviors, beliefs or preferences. But while rational gen AI delivers relevant facts, sound reasons or other trustworthy evidence with its outputs, manipulative gen AI exploits cognitive biases, heuristics and other misrepresenting information to subvert free thinking or decision-making, according to the DeepMind researchers. ... AI can build trust and rapport when models are polite, sycophantic and agreeable, praise and flatter users, engage in mimicry and mirroring, express shared interests, relational statements or adjust responses to align with users’ perspectives. Outputs that seem empathetic can fool people into thinking AI is more human or social than it really is. This can make interactions less task-based and more relationship-based, the researchers point out. “AI systems are incapable of having mental states, emotions or bonds with humans or other entities,” they emphasize. 


Federal Privacy Bill Aims To Consolidate US Privacy Law Patchwork

lthough it is not yet law, many observers are optimistic that the APRA will move forward due to its bipartisan support and the compromises it reaches on the issues of preemption and private rights of action, which have stalled prior federal privacy bills. The APRA contains familiar themes that largely mirror comprehensive state privacy laws, including the rights it provides to individuals and the duties it imposes on Covered Entities. This article discusses key departures from state privacy laws and new concepts introduced by the APRA. ... The APRA follows most state privacy laws with a broad definition of Covered Data, including any information that “identifies or is linked or reasonably linkable, alone or in combination with other information, to an individual or a device that identifies or is linked or reasonably linkable to one or more individuals.” The APRA would exclude employee information, de-identified data and publicly available information. Only the California Consumer Privacy Act (CCPA) currently includes employee information in its scope of covered data.


How cloud cost visibility impacts business and employment

What rings true is that engineers are on the frontline in controlling cloud costs. Indeed, they drive most overspending (or underspending) on cloud services and the associated fees. However, they are the most often overlooked asset when it comes to dealing with these costs. The main issue is that the cloud cost management team, including those responsible for managing finops programs, often lacks an understanding of engineers’ work and how it can affect costs. They assume that everything engineers do is necessary and unavoidable without considering there could be potential cost inefficiencies. When I work with a company to optimize their cloud costs, I often ask how they collaborate with their cloud engineers. I’m mostly met with questioning stares. This indicates that they don’t interact with the engineers in ways that would help optimize cloud costs. Companies look at their cloud engineers as if they were pilots of passenger aircraft—well-trained, following all best practices and procedures, and already optimizing everything. Unfortunately, this is not always true.


The Impact of Generative AI on Data Science

Organizations will continue to need more data scientists to fill a void in the specialized knowledge, as described above. To do so, enterprises may think to turn to the businessperson – say, the sales professional that automates quarterly financial reports. Berthold described this worker as one who can build a Microsoft Excel workflow and reliably do the data aggregation, avoiding errors. That employee can use an AI chat assistant to “upskill their Data Literacy,” says Berthold. They can come to the LLM with their “own perspective and vocabulary,” and learn. For example, they can get help on how to use Excel’s Visual Basic (VB) code to look up data and see what functions will be available, from there. ... So, AI and Data Governance need to put controls and safeguards in place. Berthold sees that a commercial platform, like his own company, can come in handy in supporting these requirements. He said, “Organizations need governance programs or IT departments to only allow access to the AI tools their workers use. Companies need to ensure they know what happens to the data before it gets sent to another person or system.“


NIST 2.0 as a Framework for All

Importantly, it’s not prescriptive in nature but focuses purely on what desirable outcomes are. This flexible approach focused on high-level outcomes instead of prescriptive solutions sends a clear message to the security practitioners globally – tailor the policies, controls and procedures to the specific outcomes that are relevant for your organisational context. The new version retains the five core functions of the original ie identify, protect, detect, respond and recover. ‘Identify’ catalogues the assets of the organisation and related cybersecurity risks and ‘Protect’ the safeguards and measures that can be taken to protect those assets. ‘Detect’ focuses on the means the organisation has to find and analyse anomalies, indicators of compromise and events, and ‘Respond’ the process that then happens when a threat is detected. Finally, ‘Recover’ looks at the capability of the organisation to restore and resume normal operations. However, under NIST 2.0 there is now a sixth function: ‘Govern’. ‘Govern’ is an overarching function that encompasses the other five and determines that the cybersecurity risk management strategy, expectations and policy are established, communicated, and monitored. 


Is Headless Banking the Next Evolution of BaaS?

Not every financial institution can just flip a switch and immediately offer headless banking services. Headless banking relies on robust APIs to allow partners to access back-end data structures and business logic, says Ryan Christiansen, executive director of Stema Center for Financial Technology at the University of Utah. “Take the Column example,” Christiansen says. “This is a very innovative bank in that its entire tech stack is built around just having these APIs. Most banks have not built their technology on this modern tech stack, which is APIs.” ... Regulators have been turning up the heat on sponsored banks and BaaS firms as a whole, which also slows down, or impedes altogether, potential entrants, Stema Center’s Christiansen says. “It’s a full Category 5 storm in the regulatory space,” he says. “There’s consent orders being handed out to these sponsor banks on a very routine basis right now. Regulators have decided that there’s not enough oversight of their fintech clients.” Sponsored banks, for example, are still required to abide by “very strict” KYC guidelines when new accounts are opened, even if a fintech partner is the one handling the account opening, Christansen says.



Quote for the day:

"Brilliant strategy is the best route to desirable ends with available means." -- Max McKeown

Daily Tech Digest - December 19, 2022

7 ways CIOs can build a high-performance team

“People want to grow and change, and good business leaders are willing to give them the opportunity to do so,” adds Cohn. Here, you can get HR involved, encouraging them to bring their expertise and ideas to the table to help you come up with the right approach to training and employee development. In addition, it’s important to remember that an empathetic leader understands that people come from different places and therefore won’t grow and develop in the same manner. Modern CIOs must approach upskilling and training with this reality in mind, advises Benjamin Marais, CIO at financial services company Liberty Group SA. You also need to create opportunities that expose your employees to what’s happening outside the business, suggests van den Berg. This is especially true where it pertains to future technologies and skills because if teams know what’s out there, they better understand what they need to do to keep up. Given the rise in competition for skills in the market, you have to demonstrate your best when trying to attract top talent and retain them, stresses Cohn. 


10 Trends in DevOps, Automated Testing and More for 2023

Developers and QA professionals are some of the most sought-after skilled laborers who are acutely aware of the value they provide to organizations. As we head into next year, this group will continue to leverage the demand for their skills in pursuit of their ideal work environment. Companies that do not consider their developer experience and force pre-pandemic systems onto a hybrid-first world set themselves up for failure, especially when tools for remote and virtual testing and quality assurance are readily available. Developer teams also need to be equally equipped for success through the tools and opportunities that can help ensure an innate sense of value to the organization – and if they don’t have the tools they need, these developers will find them elsewhere. ... We’re starting to see consolidation in both the market and in the user personas we’re all chasing. Testing companies are offering monitoring, and monitoring companies are offering testing. This is a natural outcome of the industry’s desire to move toward true observability: deep understanding of real-world user behavior, synthetic user testing, passively watching for signals and doing real-time root cause analysis—all in service of perfecting the customer experience.


The beautiful intersection of simulation and AI

Simulation models can synthesize real-world data that is difficult or expensive to collect into good, clean and cataloged data. While most AI models run using fixed parameter values, they are constantly exposed to new data that may not be captured in the training set. If unnoticed, these models will generate inaccurate insights or fail outright, causing engineers to spend hours trying to determine why the model is not working. ... Businesses have always struggled with time-to-market. Organizations that push a buggy or defective solution to customers risk irreparable harm to their brand, particularly startups. The opposite is true as “also-rans” in an established market have difficulty gaining traction. Simulations were an important design innovation when they were first introduced, but their steady improvement and ability to create realistic scenarios can slow perfectionist engineers. Too often, organizations try to build “perfect” simulation models that take a significant amount of time to build, which introduces the risk that the market will have moved on.


What is VPN split tunneling and should I be using it?

The ability to choose which apps and services use your VPN of choice and which don't is incredibly powerful. Activities like remote work, browsing your bank's website, or online shopping via public Wi-Fi can definitely benefit from the added security of a VPN, but other pursuits, like playing online games or streaming readily available content, can be hurt by the slight delay VPNs may add to your traffic. The modest decrease to your connection speed is barely noticeable for browsing, but can be disastrous for online games. Being able to simultaneously connect to sensitive sites and services through your secure VPN, and to non-sensitive games and apps means you won't constantly need to enable and disable your VPN connection when switching tasks. This is important as forgetting to enable it at the wrong time could leave you exposed to security risks. ... Split tunneling divides your network traffic in two. Your standard, unencrypted traffic continues to flow unimpeded down one path, while your sensitive and secured data gets encrypted and routed through the VPN's private network. It's like having a second network connection that's completely separate, a tiny bit slower, but also far more secure.


Why don’t cloud providers integrate?

Although it’s not an apples-to-apples comparison, Google’s Athos enables enterprises to run applications across clouds and other operating environments, including ones Google doesn’t control. As with Amazon DataZone, it’s very possible to manage third-party data sources. One senior IT executive from a large travel and hospitality company told me on condition of anonymity, “I’m sure [cloud vendors] can integrate with third-party services, but I suspect that’s not a choice they’re willing to make. For instance, they could publish some interfaces for third parties to integrate with their control plane as well as other means in the data plane.” Integration is possible, in other words, but vendors don’t always seem to want it. This desire to control sometimes leads vendors down roads that aren’t optimal for customers. As this IT executive said, “The ecosystem is being broken. Instead of interoperating with third-party services, [cloud vendors often] choose to create API-compatible competing services.” He continued, “There is a zero-sum game mindset here.” Namely, if a customer runs a third-party database and not the vendor’s preferred first-party database, the vendor has lost.


How RegTech helps financial services providers overcome regulation challenges

Two main types of RegTech capabilities are helping financial service institutions stay compliant: software that encompasses the whole system — for example a full client onboarding cycle — and software that manages a particular process, such as reporting or document management. Hugo Larguinho Brás explains: “The technologies that handle the whole process from A to Z are typically heavier to deploy, but they will allow you to cover most of your needs. These are also more expensive and often more difficult to adapt in line with a company’s specificities.” “Meanwhile, those technologies that treat part of the process can be combined with other tools. While this brings more agility, the need to find and combine several tools can also turn your target model more complex to run.” “We see more and more cloud and on-premises solutions available to asset management and securities companies, from software-as-a-service (SaaS) and platform-as-a-service (PaaS) deployed in-house, to solutions combined to outsourced capabilities ...”


What You Need to Know About Hyperscalers

Current hyperscaler adopters are primarily large enterprises. “The speed, efficiencies, and global reach hyperscalers can provide will surpass what most enterprise organizations can build within their own data centers,” Drobisewski says. He predicts that the partnerships being built today between hyperscalers and large enterprises are strategic and will continue to grow in value. “As hyperscalers maintain their focus on lifecycle, performance, and resiliency, businesses can consume hyperscaler services to thrive and accelerate the creation of new digital experiences for their customers,” Drobisewski says. ... Many adopters begin their hyperscaler migration by selecting the software applications that are best suited to run within a cloud environment, Hoecker says. Over time, these organizations will continue to migrate workloads to the cloud as their business goals evolve, he adds. Many hyperscaler adopters, as they become increasingly comfortable with the approach, are beginning to establish multi-cloud estates. “The decision criteria is typically based on performance, cost, security, access to skills, and regulatory and compliance factors,” Hoecker notes.


UID smuggling: A new technique for tracking users online

Researchers at UC San Diego have for the first time sought to quantify the frequency of UID smuggling in the wild, by developing a measurement tool called CrumbCruncher. CrumbCruncher navigates the Web like an ordinary user, but along the way, it keeps track of how many times it has been tracked using UID smuggling. The researchers found that UID smuggling was present in about 8 percent of the navigations that CrumbCruncher made. The team is also releasing both their complete dataset and their measurement pipeline for use by browser developers. The team’s main goal is to raise awareness of the issue with browser developers, said first author Audrey Randall, a computer science Ph.D. student at UC San Diego. “UID smuggling is more widely used than we anticipated,” she said. “But we don’t know how much of it is a threat to user privacy.” ... UID smuggling can have legitimate uses, the researchers say. For example, embedding user IDs in URLs can allow a website to realize a user is already logged in, which means they can skip the login page and navigate directly to content.


Bring Sanity to Managing Database Proliferation

How can you avoid being a victim of the bow wave of database proliferation? Recognize that you can allocate your resources in a way that benefits both your bottom line and your stress level by consolidating how you run and manage modern databases. Investing heavily in self-managing the legacy databases used in high volume by many of your people makes a lot of sense. Database workloads that are typically used for mission-critical transaction processing, such as IBM DB2 in financial services, are subject to performance tuning, regular patching and upgrading by specialized database administrators in a kind of siloed sanctum sanctorum. Many organizations will hire an in-house Oracle or SAP Hana expert and create a team, ... But what about the 40 other highly functional, highly desirable cloud databases in your enterprise that aren’t used as often? Do you need another 20 people to manage them? Open source databases like MySQL, MongoDB, Cassandra, PostgreSQL and many others have gained wide adoption, and many of their use cases are considered mission-critical. 


An Ode to Unit Tests: In Defense of the Testing Pyramid

What does the unit in unit tests mean? It means a unit of behavior. There's nothing in that definition dictating that a test has to focus on a single file, object, or function. Why is it difficult to write unit tests focused on behavior? A common problem with many types of testing comes from a tight connection between software structure and tests. That happens when the developer loses sight of the test goal and approaches it in a clear-box (sometimes referred to as white-box) way. Clear-box testing means testing with the internal design in mind to guarantee the system works correctly. This is really common in unit tests. The problem with clear-box testing is that tests tend to become too granular, and you end up with a huge number of tests that are hard to maintain due to their tight coupling to the underlying structure. Part of the unhappiness around unit tests stems from this fact. Integration tests, being more removed from the underlying design, tend to be impacted less by refactoring than unit tests. I like to look at things differently. Is this a benefit of integration tests or a problem caused by the clear-box testing approach? What if we had approached unit tests in an opaque-box approach?



Quote for the day:

"Strategy is not really a solo sport even if you_re the CEO." -- Max McKeown

Daily Tech Digest - June 14, 2022

Business Architecture - A New Depiction

Crucial to this depiction are components which exist in both the vertical pillars and the horizontal Business Architecture layer as follows: Application Architecture: includes the Business Process component, to associate application components (logical & operational) with the business activity they support. Information Architecture: includes the Information Component from a business perspective separately from any logical or operational representation of that information by data (structured or unstructured). Infrastructure Architecture: contains the location component. This is to recognize that business infrastructure is linked to an organization / location either by physical installation or network access. Business Architecture consists of these business components – shared with the other domains – and, in addition, more complex views which link the architecture with the business plans. For example, an architecture view for a business capability (as defined through capability-based planning) would show how the components support that capability. The 3 vertical domains can be considered to constitute IT Architecture (for the enterprise). 


Meet Web Push

One goal of the WebKit open source project is to make it easy to deliver a modern browser engine that integrates well with any modern platform. Many web-facing features are implemented entirely within WebKit, and the maintainers of a given WebKit port do not have to do any additional work to add support on their platforms. Occasionally features require relatively deep integration with a platform. That means a WebKit port needs to write a lot of custom code inside WebKit or integrate with platform specific libraries. For example, to support the HTML <audio> and <video> elements, Apple’s port leverages Apple’s Core Media framework, whereas the GTK port uses the GStreamer project. A feature might also require deep enough customization on a per-Application basis that WebKit can’t do the work itself. For example web content might call window.alert(). In a general purpose web browser like Safari, the browser wants to control the presentation of the alert itself. But an e-book reader that displays web content might want to suppress alerts altogether. From WebKit’s perspective, supporting Web Push requires deep per-platform and per-application customization.


Introduction to Infrastructure as Code - Part 1: Introducing IaC

In recent years, development has shifted away from monolithic applications and towards microservices architectures and cloud-native applications. However, modernizing apps introduces complexity, as maintaining the cloud computing architecture requires infrastructure automation tools, efficient provisioning, and scaling of new resources. Too many developers still see infrastructure provisioning and management as an opaque process that Ops teams perform using GUI tools like the Azure Portal. Infrastructure as code (IaC) challenges that notion. The practice of IaC unifies development and operations, creating a close bond between code and infrastructure. Why should we use IaC? When you develop an application, you create code, build and version it, and deploy the artifact through the DevOps pipeline. IaC allows you to create your infrastructure in the cloud using code, enabling you to version and execute that code whenever necessary. This three-article series starts with an introduction to IaC. Then, the following two articles in this series show how to use the Bicep language and Terraform HCL syntax to create templates and automatically provision resources on Azure.


VPN providers flee Indian market ahead of new data rules

The new directive by India's top cybersecurity agency, the Indian Computer Emergency Response Team (Cert-In), requires VPN, Virtual Private Server (VPS) and cloud service providers to store customers' names, email addresses, IP addresses, know-your-customer records, and financial transactions for a period of five years. SurfShark announced on Wednesday in a post titled "Surfshark shuts down servers in India in response to data law," that it "proudly operates under a strict "no logs" policy, so such new requirements go against the core ethos of the company." SurfShark is not the first VPN provider to pull its servers from the country following the directive. ExpressVPN also decided to take the same step just last week, and NordVPN has also warned that it will be removing physical servers if the directives are not reversed. ... Like many businesses around the world, Indian companies have increased their reliance on VPNs since the COVID-19 pandemic forced many employees to work from home. VPN adoption grew to allow employees to access sensitive data remotely, even as companies started adopting other secure means to allow remote access such as Zero Trust Network Access and Smart DNS solutions.


5 top deception tools and how they ensnare attackers

To work, deception technologies essentially create decoys, traps that emulate natural systems. These systems work because of the way most attackers operate. For instance, when attackers penetrate the environment, they typically look for ways to build persistence. This typically means dropping a backdoor. In addition to the backdoor, attackers will attempt to move laterally within organizations, naturally trying to use stolen or guessed access credentials. As attackers find data and systems of value, they will deploy additional malware and exfiltrate data, typically using the backdoor(s) they dropped. With traditional anomaly detection and intrusion detection/prevention systems, enterprises try to spot these attacks in progress on their entire networks and systems. Still, the problem is these tools rely on signatures or susceptible machine learning algorithms and throw off a tremendous number of false positives. Deception technologies, however, have a higher threshold to trigger events, but these events tend to be real threat actors conducting real attacks.


MIT built a new reconfigurable AI chip that can reduce electronic waste

The team's optical communication system comprises paired photodetectors and LEDs patterned with tiny pixels. The photodetectors feature an image sensor for receiving data, and LEDs transmit that data to the next layer. Since the components must work like a LEGO-like reconfigurable AI chip, they must be compatible. "The sensory chip at the bottom receives signals from the outside environment and sends the information to the next chip above by light signals. The next chip, which is a processor layer, receives the light information and then processes the pre-programmed function. Such light-based data transfer continues to other chips above, thus performing multi-functional tasks as a whole," the team explained. ... The team fabricated a single chip with a computing core that measured about four square millimeters. The chip is stacked with three image recognition "blocks", each comprising an image sensor, optical communication layer, and artificial synapse array for classifying one of three letters, M, I, or T. They then shone a pixellated image of random letters onto the chip and measured the electrical current that each neural network array produced in response.


Augmented reality head-up displays: Navigating the next-gen driving experience

HUDs work by projecting a transparent 2D or 3D digital image of navigational and hazard warning information, for example, onto the windscreen of the vehicle. These projected images then merge with the driver's view of the road ahead. Windshield HUDs, for example, are set up so that the driver does not need to shift their gaze away from the road in order to view the relevant, timely information. This technology helps to keep the driver's attention on the road, as opposed to the driver having to look down at the dashboard or navigation system. Technological advances in this area have led to HUDs with holographic displays and AR in 3D. This added depth perception makes it possible to project computer-generated virtual objects in real time into the driver's field of view to warn, inform or entertain the user. The driver's alertness to road obstacles is increased by enabling shorter obstacle visualization times, and eye strain and driving stress levels are reduced. "Holographic HUDs are paramount if we are to explore the possibilities of augmented and mixed reality for road safety," said Jana


Nigerian Police Bust Gang Planning Cyberattacks on 10 Banks

The operation was a coordinated effort between the Economic and Financial Crimes Commission of Nigeria, Interpol, the National Central Bureaus and law enforcement agencies of 11 countries across Southeast Asia, according to Interpol. The operation was initiated after Interpol's private sector partner Trend Micro provided operational intelligence to the agency about the "emergence and usage of Agent Tesla malware" in this case. Agent Tesla was found on the mobile phones and laptops of the syndicate members that were seized by the EFCC during the bust. "Through its global police network and constant monitoring of cyberspace, Interpol had the globally sourced intelligence needed to alert Nigeria to a serious security threat where millions could have been lost without swift police action," Interpol Director of Cybercrime Craig Jones says in the statement. "Further arrests and prosecutions are foreseen across the world as intelligence continues to come in and investigations unfold." 


10 ways DevOps can help reduce technical debt

In most cases, technical debt occurs because development teams take shortcuts to meet tight deadlines and struggle with constant changes. But better collaboration between dev and ops can shorten SDLC, fasten deployments, and increase their frequency. Moreover, CI/CD and continuous testing make it easier for teams to deal with changes. Overall, the collaborative culture encourages code reviews, good coding practices, and robust testing with mutual help. ... Technical debt is best controlled when managed continuously, which becomes easier with DevOps. As it facilitates constant communication, teams can track debt, facilitate awareness and resolve it as soon as possible. Team leaders can also include technical debt review into backlog and schedule maintenance sprints to deal with it promptly. Moreover, DevOps reduces the chances of incomplete or deferred tasks in the backlog, helping prevent technical debt. ... A true DevOps culture can be the key to managing technical debt over long periods. DevOps culture encourages strong collaboration between cross-functional teams, provides autonomy and ownership, and practices continuous feedback and improvement.


Once is never enough: The need for continuous penetration testing

The traditional attitude to manual pen testing is kind of like the traditional approach to driving navigation: nothing can replace the sophistication and accrued knowledge of a human. A taxi driver will always beat Google Maps, and a trained pen testing professional will find vulnerabilities and attacks that automated tests may miss, or identify responses that appear legitimate to automated software but are actually a threat. The truth is, on a case-by-case basis, this could conceivably be true. But with off-the-shelf tools and services like RaaS (Ransomware as a Service) or MaaS (Malware as a Service) that use AI/ML capabilities to enhance attack efficiency – you’d need an army of pen testers to truly meet the challenges of today’s cyber threats. And once you’d found, trained and employed them – cyberattackers would simply increase their automation efforts and you’d need to draft another army. Not a sustainable cybersecurity model, clearly. Similarly, the widescale adoption of agile development methodologies has translated into increasingly frequent software releases.



Quote for the day:

"If you are truly a leader, you will help others to not just see themselves as they are, but also what they can become." -- David P. Schloss