Showing posts with label BCI. Show all posts

Daily Tech Digest - April 13, 2025


Quote for the day:

"I've learned that people will forget what you said, people will forget what you did, but people will never forget how you made them feel." -- Maya Angelou



The True Value Of Open-Source Software Isn’t Cost Savings

Cost savings is an undeniable advantage of open-source software, but I believe that enterprise leaders often overlook other benefits that are even more valuable to the organization. When developers use open-source tools, they join a collaborative global community that is constantly learning from and improving on the technology. They share knowledge, resources and experiences to identify and fix problems and move updates forward more rapidly than they could individually. Adopting open-source software can also be a win-win talent recruitment and retention strategy for your enterprise. Many individual contributors see participating in open-source software communities as a tangible way to build their own profiles as experts in their field—and in the process, they also enhance your company’s reputation as a cool place where tech leaders want to work. However, there’s no such thing as a free lunch. Open-source software isn't immune to vendor lock-in, in which your company becomes so dependent on a partner’s product that it is prohibitively costly or difficult to switch to an alternative. You may not be paying licensing fees, but you still need to invest in support contracts for open-source tools. The bigger challenge from my perspective is that it’s still rare for enterprises to contribute regularly to open-source software communities.


The Growing Cost of Non-Compliance and the Need for Security-First Solutions

Regulatory bodies across the globe are increasing their scrutiny and enforcement actions. Failing to comply with well-established regulations like HIPAA or GDPR, or newer ones like the European Union’s Digital Operational Resilience Act (DORA) and NY DFS Cybersecurity requirements, can result in penalties that can reach millions of dollars. But the costs do not stop there. Once a company has been found to be non-compliant, it often faces reputational damage that extends far beyond the immediate legal repercussions. ... A security-first approach goes beyond just checking off boxes to meet regulatory requirements. It involves implementing robust, proactive security measures that safeguard sensitive data and systems from potential breaches. This approach protects the organization from fines and builds a strong foundation of trust and resilience in the face of evolving cyber threats. ... Many businesses still rely on outdated, insecure methods of connecting to critical systems through terminal emulators or “green screen” interfaces. These systems, often running legacy applications, can become prime targets for cybercriminals if they are not properly secured. With credential-based attacks rising, organizations must rethink how they secure access to their most vital resources.


Researchers unveil nearly invisible brain-computer interface

Today's BCI systems consist of bulky electronics and rigid sensors that prevent the interfaces from being useful while the user is in motion during regular activities. Yeo and colleagues constructed a micro-scale sensor for neural signal capture that can be easily worn during daily activities, unlocking new potential for BCI devices. His technology uses conductive polymer microneedles to capture electrical signals and conveys those signals along flexible polyimide/copper wires—all of which are packaged in a space of less than 1 millimeter. A study of six people using the device to control an augmented reality (AR) video call found that high-fidelity neural signal capture persisted for up to 12 hours with very low electrical resistance at the contact between skin and sensor. Participants could stand, walk, and run for most of the daytime hours while the brain-computer interface successfully recorded and classified neural signals indicating which visual stimulus the user focused on with 96.4% accuracy. During the testing, participants could look up phone contacts and initiate and accept AR video calls hands-free as this new micro-sized brain sensor was picking up visual stimuli—all the while giving the user complete freedom of movement.


Creating SBOMs without the F-Bombs: A Simplified Approach to Creating Software Bills of Material

It's important to note that software engineers are not security professionals, but in some important ways, they are now being asked to be. Software engineers pick and choose from various third-party and open source components and libraries. They do so — for the most part — with little analysis of the security of those components. Those components can be — or become — vulnerable in a whole variety of ways: Once-reliable code repositories can become outdated or vulnerable, zero days can emerge in trusted libraries, and malicious actors can — and often do — infect the supply chain. On top of that, risk profiles can shift without warning, turning what was a well-considered design choice into a vulnerable one almost overnight. Software engineers never before had to consider these things, and yet the arrival of the SBOM is making them do so like never before. Customers can now scrutinize their releases, and then potentially reject or send them back for fixing — resulting in even more work on short notice and piling on pressure. If the risk profile of a particular component changes between the creation of an SBOM and a customer's review of it, the release might be rejected. This is understandably the cause of much frustration for software engineers who are often already under great pressure.
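The mechanics of an SBOM are simpler than the friction around it suggests: it is a structured inventory of the components a release depends on. As a rough illustration, not tied to any particular tool, the sketch below builds a minimal CycloneDX-style document; the component names and versions are hypothetical placeholders.

```python
import json

# Build a minimal, illustrative CycloneDX-style SBOM document.
# The components passed in are hypothetical, not real dependencies.
def build_sbom(components):
    return {
        "bomFormat": "CycloneDX",
        "specVersion": "1.5",
        "version": 1,
        "components": [
            {"type": "library", "name": name, "version": version}
            for name, version in components
        ],
    }

sbom = build_sbom([("example-http-lib", "2.4.1"), ("example-json-lib", "1.0.3")])
print(json.dumps(sbom, indent=2))
```

In practice, engineers rarely write these by hand; build-time tooling generates them, and the pain point described above is keeping the inventory's risk picture current after it is produced.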


Risk & Quality: The Hidden Engines of Business Excellence

In the world of consultancy, firms navigate a minefield of challenges—tight deadlines, budget constraints, and demanding clients. Then, out of nowhere, disruptions such as regulatory shifts or resource shortages strike, threatening project delivery. Without a robust risk management framework, these disruptions can snowball into major financial and reputational losses. ... Some leaders see quality assurance as an added expense, but in reality, it’s a profit multiplier. According to the American Society for Quality (ASQ), organizations that emphasize quality see an average of 4-6% revenue growth compared to those that don’t. Why? Because poor quality leads to rework, client dissatisfaction, and reputational damage. ... The cost of poor quality is substantial. Firms that don’t embed quality into their culture ultimately face consequences like customer churn, regulatory fines, and declining market share. Additionally, fixing mistakes after the fact is far more expensive than ensuring quality from the outset. Organizations that invest in quality from the start avoid unnecessary costs, improve efficiency, and strengthen their bottom line. As Philip Crosby, a pioneer in quality management, stated, “Quality is free. It’s not a gift, but it’s free. What costs money are the unquality things—all the actions that involve not doing jobs right the first time.” 


Enabling a Thriving Middleware Market

A more unified regulatory approach could reduce uncertainty, streamline compliance, and foster an ecosystem that better supports middleware development. However, given the unlikelihood of creating a new agency, a more feasible approach would be to enhance coordination among existing regulators. The FTC could address antitrust concerns, the FCC could promote interoperability, and the Department of Commerce could support innovation through trade policies and the development of technical standards. Even here, slow rulemaking and legal challenges could hinder progress. Ensuring agencies have the necessary authority, resources, and expertise will be critical. A soft-law approach, modeled after the National Institute of Standards and Technology (NIST) AI Risk Management Framework, might be the most feasible option. A Middleware Standards Consortium could help establish best practices and compliance frameworks. Standards development organizations (SDOs), such as the Internet Engineering Task Force or the World Wide Web Consortium (W3C), are well-positioned to lead this effort, given their experience crafting internet protocols that balance innovation with stability. For example, a consortium of SDOs with buy-in from NIST could establish standards for API access, data portability, and interoperability of several key social media functionalities.


How to Supercharge Application Modernization with AI

The refactoring of code – which means restructuring and, often, partly rewriting existing code to make applications fit a new design or architecture – is the most crucial part of the application modernization process. It has also tended in the past to be the most laborious because it required developers to pore over often very large codebases, painstakingly tweaking code function-by-function or even line-by-line. AI, however, can do much of this dirty work for you. Instead of having to find places where code should be rewritten or modified in order to optimize it, developers can leverage AI tools to look for code that requires attention. ... When you move applications to the cloud, the infrastructure that hosts them is effectively a software resource – which means you can configure and manage it using code. By extension, you can use AI tools like Cursor and Copilot to write and test your code-based infrastructure configurations. Specifically, AI is capable of tasks such as writing and maintaining the code that manages CI/CD pipelines or cloud servers. It can also suggest opportunities to optimize existing infrastructure code to improve reliability or security. And it can generate the ancillary configurations, such as Identity and Access Management (IAM) policies, that govern and help to secure cloud infrastructure.


Balancing Generative AI Risk with Reward

As businesses start evolving in their use of this technology and exposing it to a broader base inside and outside their companies, risks can increase. “I’ve always loved to say AI likes to please,” said Danielle Derby, director of enterprise data management at TriNet, who joined Rodarte at the presentation. Risk manifests “because AI doesn’t know when to stop,” said Derby, and you may not, for example, have thought to include a human or technological guardrail to keep it from answering a question it wasn’t prepared to handle accurately. “There are a lot of areas where you’re just not sure how someone who’s not you is going to handle this new technology,” she said. ... Improper data splitting can lead to data leakage, resulting in overly optimistic model performance; you can mitigate this by using techniques like stratified sampling to ensure representative splits and by always splitting the data before performing any feature engineering or preprocessing. Inadequate training data can lead to overfitting, and too little test data can yield unreliable performance metrics; you can mitigate both by ensuring there is enough data for training and testing based on problem size and by using a validation set in addition to the training and test sets.
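The splitting advice above can be sketched in a few lines: sample within each class so the train/test label proportions match the full dataset. This is a minimal illustration rather than a production routine, and the labels and 80/20 class mix are made up.

```python
import random
from collections import defaultdict

# Minimal stratified split: shuffle and sample within each class so the
# test set preserves the overall label proportions.
def stratified_split(labels, test_fraction=0.2, seed=42):
    rng = random.Random(seed)
    by_label = defaultdict(list)
    for i, y in enumerate(labels):
        by_label[y].append(i)
    train_idx, test_idx = [], []
    for idxs in by_label.values():
        rng.shuffle(idxs)
        n_test = max(1, int(len(idxs) * test_fraction))
        test_idx.extend(idxs[:n_test])
        train_idx.extend(idxs[n_test:])
    return train_idx, test_idx

# An imbalanced synthetic dataset: 80 "spam" examples, 20 "ham".
labels = ["spam"] * 80 + ["ham"] * 20
train_idx, test_idx = stratified_split(labels)
# Each class contributes roughly 20% of its samples to the test set.
```

Crucially, per the excerpt, this split should happen before any feature engineering or preprocessing, so that statistics computed on training data never leak into the test set.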


Why Cybersecurity-as-a-Service is the Future for MSPs and SaaS Providers

For MSPs and SaaS providers, adopting a proactive, scalable approach to cybersecurity—one that provides continuous monitoring, threat intelligence, and real-time response—is crucial. By leveraging Cybersecurity-as-a-Service (CSaaS), businesses can access enterprise-grade security without the need for extensive in-house expertise. This model not only enhances threat detection and mitigation but also ensures compliance with evolving cybersecurity regulations. ... The increasing complexity and frequency of cyber threats necessitate a proactive and scalable approach to security. CSaaS offers a flexible solution by outsourcing critical security functions to specialized providers. This ensures continuous monitoring, threat intelligence, and incident response without the need for extensive in-house resources. As cyber threats evolve, CSaaS providers continuously update their tools and techniques, ensuring companies stay ahead of emerging vulnerabilities. CSaaS enhances a provider's ability to protect sensitive data and allows it to focus confidently on core business operations. ... Embracing CSaaS is essential for maintaining a robust security posture in an increasingly complex digital landscape.


Meta: WhatsApp Vulnerability Requires Immediate Patch

Meta has voluntarily disclosed the new WhatsApp vulnerability, now published as CVE-2025-30401, after investigating it internally as a submission to its bug bounty program. The company says there is not yet evidence that it has been exploited in the wild. The issue likely impacts all versions of WhatsApp for Windows prior to 2.2450.6. The WhatsApp vulnerability hinges on an attacker sending a malicious attachment, and would require the target to attempt to manually view the attachment within the software. A spoofing issue makes it possible for the file opening handler to execute code that has been hidden as a seemingly valid MIME type, such as an image or document. That could pave the way for remote code execution, though a CVSS score has yet to be assigned as of this writing. ... The WhatsApp vulnerability exploited by Paragon was a much more devastating zero-click flaw (and one that targeted phones and mobile devices), similar to one exploited by NSO Group on the platform to compromise over a thousand devices. That landed the spyware vendor in trouble in US courts, where it was found to have violated national hacking laws. The court found that NSO Group had obtained WhatsApp’s underlying code and reverse-engineered it to create several zero-click exploits that it put to use in its spyware.

Daily Tech Digest - February 22, 2022

Partner Across Teams to Create a Cybersecurity Culture

Just because a software engineer doesn’t work on the security team doesn’t mean that security isn’t their responsibility. In addition to the standard security training, you can further empower your engineering teams by training and encouraging them to think like hackers. I was fortunate enough to work for a company some time ago that scheduled annual competitions with prizes and bragging rights. These competitions served as security training and engaged us in a series of engineering puzzles that included SQL injection, cross-site scripting (XSS), cryptography and social engineering. ... Even with well-implemented training programs and a dedicated cadre of security-minded engineers building your applications, there is still plenty for your security engineers to work on. The shared-responsibility model will reduce the risk of successful phishing attacks or other malicious activity, but it won’t remove it entirely. Ideally, security teams will move from a place where they are constantly fighting fires to one where they can engage in strategic initiatives to further improve security for the organization, automate risk detection wherever possible, and prepare your organization for the future.
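One of the puzzle topics mentioned above, SQL injection, is easy to demonstrate in a few lines. The sketch below uses an in-memory SQLite database with a made-up users table to contrast an unsafe string-built query with a parameterized one.

```python
import sqlite3

# Hypothetical data for illustration only.
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE users (name TEXT, role TEXT)")
conn.execute("INSERT INTO users VALUES ('alice', 'admin'), ('bob', 'user')")

# A classic injection payload supplied as "user input".
user_input = "x' OR '1'='1"

# Unsafe: the input is spliced into the SQL text, so the OR clause
# becomes part of the query and matches every row.
unsafe = conn.execute(
    f"SELECT name FROM users WHERE name = '{user_input}'"
).fetchall()

# Safe: the driver binds the input as a value, never as SQL,
# so the payload matches no rows.
safe = conn.execute(
    "SELECT name FROM users WHERE name = ?", (user_input,)
).fetchall()
```

Seeing the unsafe query return the whole table is exactly the kind of hands-on lesson the competitions described above deliver: engineers who have written the exploit once rarely write the vulnerable version again.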


Agile Doesn’t Work Without Psychological Safety

Soon after implementing agile, many organizations revert to the default position of worshiping at the altar of technical processes and tools, because cultural considerations seem abstract and difficult to operationalize. It’s easier to pay lip service to the human side and then move on to scrumming, sprinting, kanbaning, and kaizening because these processes serve as tangible, measurable, and observable indicators, giving the illusion of success and the appearance of developing agile at scale. Begin your agile transformation by framing agile as a cultural rather than a technical or mechanical implementation. In doing so, be careful not to approach culture as a workstream. A workstream is defined as the progressive completion of tasks required to finish a project. When we approach culture as a workstream within the context of agile, we classify it as something that can be completed. Culture cannot be completed. Yet I see agile teams attempting to project-manage it as part of the work breakdown structure, as if it has a beginning, middle, and end. It doesn’t.


Inside the U.K. lab that connects brains to quantum computers

While BCIs and quantum computers are undoubtedly promising technologies emerging at the same point in history, the question is why bring them together – which is exactly what the consortium of researchers from the U.K.’s University of Plymouth, Spain’s University of Valencia and University of Seville, Germany’s Kipu Quantum, and China’s Shanghai University are seeking to do. Technologists love nothing more than mashing together promising concepts or technologies in the belief that, when united, they will represent more than the sum of their parts. Sometimes this works gloriously. As the venture capitalist Andrew Chen describes in his book The Cold Start Problem, Instagram leveraged the emergence of camera-equipped smartphones and the simultaneous powerful network effects of social media to become one of the fastest-growing apps in history. Taking two must-have technologies and combining them doesn’t always work, though. Apple CEO Tim Cook once quipped that “you can converge a toaster and a refrigerator, but, you know, those things are probably not going to be pleasing to the user.”


Three ways COVID-19 is changing how banks adapt to digital technology

Bank leaders face the difficult task of balancing the traditional approach to risk management with the need to respond quickly to a crisis that has created massive changes to their operating environment. Criminal cyber activity, including fraud and phishing attacks, has increased as more employees work remotely. However, as one participant said: “We have not yet seen the massive increase in sophisticated, advanced persistent threat cyber attacks that we normally associate with events like these.” As banks shift from crisis mode, their boards need to address new emerging risks, such as video and voice communication surveillance with everyone using Zoom and other platforms, data security controls for the use of personal equipment, and cases of third and fourth parties falling victim to cyber issues. ... As the economic impacts of the pandemic become clearer, banks are updating risk models and stress scenarios in an attempt to stay ahead of the curve. However, uncertainty in the operating environment continues to pose challenges. A lack of regulatory harmonization may further complicate benchmarking among peers across countries, though there is hope that this will improve soon.


The threat of quantum computing to security infrastructure

The report states: ”The encryption technologies that are securing Canada’s financial systems today will one day become obsolete. If we do nothing, the financial data that underpins Canada’s economy will inevitably become more vulnerable to cyber criminals.” In the US, as noted above, the National Security Agency took an early lead in identifying the perceived threat. On January 19, 2022, an action from the US president was made public: the White House issued a “Memorandum on Improving the Cybersecurity of National Security, Department of Defense and Intelligence Community Systems.” The document shows the urgency needed to address perceived major threats. It outlines major actions to avoid security lapses that would be created by quantum computers targeting critical secret data and related infrastructure. It also identifies the management responsibilities in the various agencies to implement these measures within a matter of months. This perceived threat to existing cybersecurity will generate a great deal of private-industry activity and bring well-funded new companies into the business of transitioning to new security solutions.


AI fairness in banking: the next big issue in tech

“People want to be treated fairly by an agent whether artificial or not. The difference for a lot of applications is that people are not aware of the full extent of the decision making and the statistical regularities across a larger population where some of these issues can arise. There is a lot of cynicism around these decisions.” He adds that there are technical as well as organisational solutions that financial services providers need to apply. These, combined with policies of transparency about the processes in place, provide an overall strategy. He adds: “The first thing is to have processes of regularly reporting on and examining and making corrections to data that is used to train models as well as to test them. “So, a simple test is representation of people that belong to legally protected categories by race, age, gender, ethnic origin and religious status to determine if there is enough data to represent each of these groups with accurate models. In addition, there is a need to determine whether there are other inputs to the model or features that could be correlated with these protected classes and have a potentially adverse or discriminatory impact on the output of the model.”
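The representation test described in the quote can be sketched very simply: measure each protected group's share of the training data, and check whether another feature effectively reveals group membership. The groups, threshold, and zip-code proxy below are synthetic illustrations, not a recommended methodology.

```python
from collections import Counter

# Synthetic training records: group "B" is underrepresented, and zip code
# perfectly separates the two groups (a potential proxy feature).
records = (
    [{"group": "A", "zip": "10001"}] * 90
    + [{"group": "B", "zip": "20002"}] * 10
)

# Step 1: representation check against an illustrative 20% floor.
counts = Counter(r["group"] for r in records)
total = sum(counts.values())
shares = {g: n / total for g, n in counts.items()}
underrepresented = [g for g, s in shares.items() if s < 0.2]

# Step 2: crude proxy check — does each group map to a single zip code,
# meaning the "neutral" feature pins down the protected class?
zips_per_group = {g: {r["zip"] for r in records if r["group"] == g} for g in counts}
proxy = all(len(z) == 1 for z in zips_per_group.values())
```

Real fairness audits use stronger statistical tests for correlation and disparate impact, but even this crude pass surfaces the two failure modes the quote names: too little data for a group, and features that stand in for protected attributes.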


4 common misunderstandings about enterprise open source software

It might seem natural to download community-supported bits from the Internet rather than purchase an integrated product. This is especially the case when the community projects are relatively simple and self-contained or if you have reasons to develop independent expertise or do extensive customization. (Although working with a vendor to get needed changes into the upstream project is a possible alternative in the latter case.) However, if the software isn’t a differentiating capability for your business, hiring the right highly-skilled engineers is neither easy nor cheap. There’s also the ongoing support burden if your downloaded projects turn into a fork of the upstream community project. And if you don’t want them to, you’ll need to factor in the time to work in the upstream projects to get needed features added. There’s also a lot of complexity in categories like enterprise open source container platforms in the cloud-native space. Download Kubernetes? You’re just getting started. How about monitoring, distributed tracing, CI/CD, serverless, security scanning, and all the other features you’ll want in a complete platform? 


Leadership when the chips are down

Particularly noteworthy is the obsessive nature of Shackleton’s encounter with a territory so resistant to accurate perception. We risk bathos to say that the business landscape presents challenges on a par with the South Pole, yet the perceptual difficulties posed by Antarctica offer clear parallels for executives and entrepreneurs. The southernmost continent is unpredictable, unstable, and unforgiving. Compasses don’t behave normally. Much of what appears terra firma is actually floating ice, and deadly crevasses lurk under the snow. Snow blindness, a painful effect of the dazzling surroundings, can make vision itself impossible. ... Shackleton’s failings as a manager were manifest in his planning for the Heart of the Antarctic expedition. For a trip on foot of 1,720 miles to and from the Pole, his four-man unit brought food for just 91 days of hard labor, high altitude, and mind-numbing cold. His return instructions to the crew of the Nimrod, the ship that dropped off his party, were impossibly vague. 


How can banks remain relevant in the fastest growing digital market in the world?

While bolting on a digital banking system may be a quick fix for incumbents, the only way for FIs to truly keep up with the pace of change and future-proof their business is to invest in modern architecture which offers them the flexibility required to develop and deploy products and services at speed. Built with advanced customisation at their core, modern platforms enable FIs to approach product development with a different mindset to those struggling with legacy systems. As a result, FIs benefit from faster time-to-market, being able to scale up innovative digital operations, offer new products or services, and respond to ever-changing market requirements much faster. Shifting consumer behaviours, coupled with intensified competition, are making it increasingly difficult for banks in the APAC region to remain relevant. They are fighting not only to keep their loyal customer base, but also to stay ahead of the curve by offering customers the advanced digital services they require. Only by ensuring they have a comprehensive, future-proof system in place, underpinning their operations, will they truly be able to embrace the digital future.


Sustaining Agile Transformation – Our Experience

The organization needs to rethink and create a career roadmap for Agile roles like Product Owner, Scrum Master, and Developer. The organization must build and enhance the self-paced learning experience, embed learning in everyday work, develop role-based training, open up new learning areas, and so on. For certain key roles, the organization can focus on establishing academies, such as a Scrum Master Academy. This will ensure there is continuous learning and a flow of trained Scrum Masters as and when needed. Coaching skills should be taught to and embedded in Agile leaders and change agents. Ensure leaders are trained in and embrace foundational values and principles. Establishing and retaining a central team, such as a lean CoE, will be very beneficial to oversee the transformation and provide support when needed. The organization can deliberate on establishing the CoE at the divisional or organizational level. Collaborative forums like CoPs, Guilds, Chapters, etc. should be established and run successfully.



Quote for the day:

"Leaders must see the dream in their mind before they will accomplish the dream with their team." -- Orrin Woodward

Daily Tech Digest - October 17, 2021

Multi-User IP Address Detection

When an Internet user visits a website, the underlying TCP stack opens a number of connections in order to send and receive data from remote servers. Each connection is identified by a 4-tuple (source IP, source port, destination IP, destination port). Repeated requests from the same web client will likely be mapped to the same source port, so the number of distinct source ports can serve as a good indication of the number of distinct client applications. By counting the number of open source ports for a given IP address, you can estimate whether this address is shared by multiple users. User agents provide device-reported information such as browser and operating system versions. For multi-user IP detection, you can count the number of distinct user agents in requests from a given IP. To avoid overcounting web clients per device, you can exclude requests identified as bot-triggered and count only requests from user agents used by web browsers. There are some tradeoffs to this approach: some users may use multiple web browsers, and some other users may have exactly the same user agent.
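The two heuristics above, distinct source ports and distinct user agents per IP, can be sketched as a simple aggregation over request logs. The request records, thresholds, and field names below are invented for illustration.

```python
from collections import defaultdict

# Synthetic request log: one IP shows several ports and user agents
# (suggesting multiple users behind it), the other does not.
requests = [
    {"ip": "203.0.113.7", "port": 50101, "ua": "Mozilla/5.0 (Windows)"},
    {"ip": "203.0.113.7", "port": 50101, "ua": "Mozilla/5.0 (Windows)"},
    {"ip": "203.0.113.7", "port": 50230, "ua": "Mozilla/5.0 (Mac)"},
    {"ip": "203.0.113.7", "port": 50411, "ua": "Mozilla/5.0 (Linux)"},
    {"ip": "198.51.100.2", "port": 44321, "ua": "Mozilla/5.0 (Mac)"},
]

# Aggregate distinct ports and user agents per source IP.
ports = defaultdict(set)
agents = defaultdict(set)
for r in requests:
    ports[r["ip"]].add(r["port"])
    agents[r["ip"]].add(r["ua"])

# Flag an IP as likely shared when both counts clear a threshold
# (the thresholds here are arbitrary placeholders).
def likely_shared(ip, port_threshold=3, ua_threshold=3):
    return len(ports[ip]) >= port_threshold and len(agents[ip]) >= ua_threshold
```

A production system would also apply the bot filtering and browser-only user-agent restriction described above before counting, and would tune the thresholds against labeled traffic.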


Critical infrastructure security dubbed 'abysmal' by researchers

"While nation-state actors have an abundance of tools, time, and resources, other threat actors primarily rely on the internet to select targets and identify their vulnerabilities," the team notes. "While most ICSs have some level of cybersecurity measures in place, human error is one of the leading reasons threat actors are still able to compromise them time and again." Some of the most common issues allowing initial access cited in the report include weak or default credentials, outdated or unpatched software vulnerable to bug exploitation, credential leaks caused by third parties, shadow IT, and the leak of source code. After conducting web scans for vulnerable ICSs, the team says that "hundreds" of vulnerable endpoints were found. ... Software accessible with default manufacturer credentials allowed the team to access the water supply management platform. Attackers could have tampered with water supply calibration, stopped water treatment, and manipulated the chemical composition of water supplies.


What is a USB security key, and how do you use it?

There are some potential drawbacks to using a hardware security key. First of all, you could lose it. While security keys provide a substantial increase in security, they also provide a substantial increase in responsibility. Losing a security key can result in a serious headache. Most major websites suggest that you set up backup 2FA methods when enrolling a USB security key, but there's always a small but real chance that you could permanently lose access to a specific account if you lose your key. Security-key makers suggest buying more than one key to avoid this situation, but that can quickly get expensive. Cost is another issue. A hardware security key is the only major 2FA method for which you have to spend money. You can get a basic key supporting the U2F/WebAuthn standards for $15, but some websites and workplaces require specialized protocols for which compatible keys can cost up to $85 each. Finally, limited usability is also a factor. Not every site supports USB security keys. If you're hoping to use a security key on every site for which you have an account, you're guaranteed to come across at least a few that won't accept it.


Future-proofing the organization the ‘helix’ way

The leaders need a high level of domain expertise, obviously, but other skills as well. As capability managers, these leaders must excel at strategic workforce management, for example—not short-sighted resource attribution for the products at hand, but the strategic foresight and long-term perspective to understand what the workload will be today, tomorrow, three to five years from now. They need to understand what skills they don’t have in-house and must acquire or build. These leaders become supply-and-demand managers of competence. They must also be excellent—and rigid—portfolio managers who make their resource decisions in line with the overall transformation. The R&D organization, for example, cannot start research projects inside a product line whose products are classified as “quick return,” even if they have people idle. It’s a different mindset. In fact, R&D leaders don’t necessarily have to be the best technologists in order to be successful. They must be farsighted and able to anticipate trends—including technological trends—but ultimately what matters is their ability to build the department in a way that ensures it’s ready to carry the demands of the organization going forward.


Robots Will Replace Our Brains

Over the years, despite numerous fruitless attempts, no one has come close to recreating this organ in all its intricate detail; it is challenging to fathom such an invention in the scientific world at this point, even considering the discoveries that surface every other day. As one research director notes, we are very good at gathering data and developing algorithms to reason with that data. Nevertheless, that reasoning is only as sound as the data, which is one step removed from reality for the AI we have now. Science fiction movies, for instance, depict only a thin line separating human intelligence from artificial intelligence. ... Researchers at the U.S. National Institute of Standards and Technology (NIST) are building a new superconducting switch that may soon enable computers to analyze information and make decisions much as humans do. The ultimate goal is to integrate this switch into everyday life, from transportation to medicine. The device contains an artificial synapse that processes electrical signals just as a biological synapse does and converts them to an adequate output, just as the brain does.


Data Storage Strategies - Simplified to Facilitate Quick Retrieval of Data and Security

No matter the reason for the downtime, it can be very costly. An efficient data strategy goes beyond deciding where data will be kept on a server. It must include methods for backing up data and ensuring that it is simple and fast to restore after a disaster, a hardware failure, or a human mistake. Putting a disaster recovery plan in place is a good start, and it helps guarantee that data and the related systems are available again after a minimum of disruption. Cloud-based disaster recovery and virtualization are now required components of every disaster recovery strategy; together, they can assure you that no customer will ever experience more downtime than they can afford. By relying on a cloud storage service, a company can outsource the storage issue entirely. Online data storage also minimizes the costs associated with internal resources: the business needs no internal resources or assistance to manage and keep its data, because the data warehousing consulting services provider takes care of everything.


RISC-V: The Next Revolution in the Open Hardware Movement

You could always build your own proprietary software and be better than your competitors, but the world has changed. Now almost everyone is standing on the shoulders of giants. When you need an operating system kernel for a new project, you can use Linux directly. There is no need to recreate a kernel from scratch, and you can also modify it for your own purposes (or write your own drivers). You can be certain you are relying on a broadly tested product, because you are just one of a million users doing the same. That is exactly what relying on an open-source CPU architecture could provide. No need to design things from scratch; you can innovate on top of the existing work and focus on what really matters to you: the value you are adding. At the end of the day, it means lowering the barriers to innovation. Obviously, not everyone is able to design an entire CPU from scratch, and that’s the point: you can bring in only what you need, or simply enjoy new capabilities provided by the community, exactly the same way you do with open-source software, from the kernel to languages.


The Conundrum Of User Data Deletion From ML Models

As the name suggests, approximate deletion lets us eliminate the majority of the implicit data associated with a user from the model. The data is ‘forgotten,’ in the sense that full retraining of the model can be deferred to a more opportune time. Approximate deletion is particularly useful for rapidly removing sensitive information or unique features associated with a particular individual that could be used for identification in the future, while deferring computationally intensive full model retraining to times of lower computational demand. Under certain assumptions, approximate deletion can even accomplish exact deletion of a user’s implicit data from the trained model. The researchers have approached the deletion challenge differently from their counterparts in the field. They describe a novel approximate deletion technique for linear and logistic models that is linear in the feature dimension and independent of the number of training examples. This is a significant improvement over conventional approaches, which are always superlinearly dependent on the size of the training data.
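To make concrete why deleting a user's data need not mean retraining from scratch, here is a minimal pure-Python sketch for a one-feature least-squares model. It is an illustrative stand-in, not the researchers' algorithm: the model keeps updatable sufficient statistics, so one user's point can be removed exactly, in constant time, and the result matches a full retrain without that point.

```python
# Deleting one user's point from a simple least-squares fit without
# retraining on the remaining data, via updatable sufficient statistics.
# Illustrative sketch only -- not the paper's approximate-deletion method.

class DeletableOLS:
    """1-D ordinary least squares y = a*x + b with O(1) point deletion."""

    def __init__(self):
        self.n = 0
        self.sx = self.sy = self.sxx = self.sxy = 0.0

    def add(self, x, y):
        self.n += 1
        self.sx += x; self.sy += y
        self.sxx += x * x; self.sxy += x * y

    def delete(self, x, y):
        # Subtract the point's contribution -- the model "forgets" it
        # without touching any other training example.
        self.n -= 1
        self.sx -= x; self.sy -= y
        self.sxx -= x * x; self.sxy -= x * y

    def coefficients(self):
        denom = self.n * self.sxx - self.sx * self.sx
        a = (self.n * self.sxy - self.sx * self.sy) / denom
        b = (self.sy - a * self.sx) / self.n
        return a, b

model = DeletableOLS()
for x, y in [(1, 2.1), (2, 3.9), (3, 6.2), (4, 8.1)]:
    model.add(x, y)

model.delete(3, 6.2)          # remove one user's data point

retrained = DeletableOLS()    # full retrain without that point, to compare
for x, y in [(1, 2.1), (2, 3.9), (4, 8.1)]:
    retrained.add(x, y)

assert all(abs(u - v) < 1e-9
           for u, v in zip(model.coefficients(), retrained.coefficients()))
```

For real models the bookkeeping is far more involved, but the principle is the same: keep enough aggregate state that a user's influence can be subtracted cheaply.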


9 reasons why you’ll never become a Data Scientist

Have you ever invested an entire weekend in a geeky project? Have you ever spent your nights browsing GitHub while your friends were out partying? Have you ever said no to your favorite hobby because you’d rather code? If you couldn’t answer yes to any of the above, you’re not passionate enough. Data Science is about facing really hard problems and sticking with them until you find a solution. If you’re not passionate enough, you’ll shy away at the sight of the first difficulty. Think about what attracts you to becoming a Data Scientist. Is it the glamorous job title? Or is it the prospect of plowing through tons of data in search of insights? If it is the latter, you’re heading in the right direction. ... Only crazy ideas are good ideas. And as a Data Scientist, you’ll need plenty of those. Not only will you need to be open to unexpected results (they occur a lot!), but you’ll also have to develop solutions to really hard problems. This requires a level of extraordinary thinking that you can’t accomplish with normal ideas.


Why Don't Developers Write More Tests?

If deadlines are tight or the team leaders aren’t especially committed to testing, it is often one of the first things software developers are forced to skip. On the other hand, some developers just don’t think tests are worth their time. “They might think, ‘this is a very small feature, anyone can create a test for this, my time should be utilized in something more important.’” Mudit Singh of LambdaTest told me. ... In truth, there are some legitimate limitations to automated tests. Like many complicated matters in software development, the choice to test or not is about understanding the tradeoffs. “Writing automated tests can provide confidence that certain parts of your application work as expected,” Aidan Cunniff, the CEO of Optic told me, “but the tradeoff is that you’ve invested a lot of time ‘stabilizing’ and making ‘reliable’ that part of your system.” ... While tests might have made my new feature better and more maintainable, they were technically a waste of time for the business because the feature wasn’t really what we needed. We failed to invest enough time understanding the problem and making a plan before we started writing code.



Quote for the day:

"Leaders are readers, disciples want to be taught and everyone has gifts within that need to be coached to excellence." -- Wayde Goodall

Daily Tech Digest - August 21, 2021

Can AGI take the next step toward genuine intelligence?

To take the next step on the road to genuine intelligence, AGI needs to create its underpinnings by emulating the capabilities of a three-year-old. Take a look at how a three-year-old playing with blocks learns. Using multiple senses and interaction with objects over time, the child learns that blocks are solid and can’t move through each other, that if the blocks are stacked too high they will fall over, that round blocks roll and square blocks don’t, and so on. A three-year-old, of course, has an advantage over AI in that he or she learns everything in the context of everything else. Today’s AI has no context. Images of blocks are just different arrangements of pixels. Neither image-based AI (think facial recognition) nor word-based AI (like Alexa) has the context of a “thing” like the child’s block which exists in reality, is more-or-less permanent, and is susceptible to basic laws of physics. This kind of low-level logic and common sense in the human brain is not completely understood but human intelligence develops within the context of human goals, emotions, and instincts. Humanlike goals and instincts would not form the best basis for AGI.


How to take advantage of Android 12’s new privacy options

First and foremost in the Android 12 privacy lineup is Google’s shiny new Privacy Dashboard. It’s essentially a streamlined command center that lets you see how different apps are accessing data on your device so you can clamp down on that access as needed. ... Next on the Android 12 privacy list is a feature you’ll occasionally see on your screen but whose message might not always be obvious. Whenever an app is accessing your phone’s camera or microphone — even if only in the background — Android 12 will place an indicator in the upper-right corner of your screen to alert you. When the indicator first appears, it shows an icon that corresponds with the exact manner of access. But that icon remains visible only for a second or so, after which point the indicator changes to a tiny green dot. So how can you know what’s being accessed and which app is responsible? The secret is in the swipe down: Anytime you see a green dot in the corner of your screen, swipe down once from the top of the display. The dot will expand back to that full icon, and you can then tap it to see exactly what’s involved.


Achieving Harmonious Orchestration with Microservices

The interdependency of a microservices-based architecture also complicates logging and makes log aggregation a vital part of a successful approach. Sarah Wells, the technical director at the Financial Times, has overseen her team’s migration of more than 150 microservices to Kubernetes. While creating an effective log aggregation system ahead of this project, Wells cited the need to selectively choose metrics and named attributes that identify an event, along with all the surrounding occurrences happening as part of it. Correlating related services ensures that a system is designed to flag genuinely meaningful issues as they happen. In her recent talk at QCon, she also noted the importance of understanding rate limits when constructing your log aggregation: as she pointed out, when it comes to logs, you often don’t know you’ve lost a record of something important until it’s too late. A great approach is to implement a process that turns any such situation into a request. For instance, the next time your team finds itself looking for a piece of information it deems useful, don’t just fulfill the request; log it for your team’s next process review to see whether you can expand your reporting metrics.
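Wells's advice about named attributes and correlation can be sketched with structured logging: every service emits JSON events that share a transaction ID, so an aggregator can reassemble everything that happened as part of one request. A minimal standard-library sketch (the event and field names are invented for illustration):

```python
import json
import logging
import uuid

# Each log line is a JSON object: named attributes identify the event,
# and a shared transaction_id correlates entries across services.
logger = logging.getLogger("orders")
handler = logging.StreamHandler()
handler.setFormatter(logging.Formatter("%(message)s"))
logger.addHandler(handler)
logger.setLevel(logging.INFO)

def log_event(event, transaction_id, **attrs):
    record = {"event": event, "transaction_id": transaction_id, **attrs}
    logger.info(json.dumps(record))
    return record  # returned so callers (and tests) can inspect it

# One user request flowing through three hypothetical services:
txn = str(uuid.uuid4())
log_event("order_received", txn, service="api-gateway", items=3)
log_event("payment_charged", txn, service="billing", amount_cents=1299)
log_event("order_shipped", txn, service="fulfilment")
```

Because every line carries the same `transaction_id`, a log aggregator can group the three entries into one end-to-end story of the request, which is exactly the correlation Wells describes.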


How Ready Are You for a Ransomware Attack?

Setting the bar high enough to protect against initial entry is a laudable goal, but also adheres to the law of diminishing returns. This means the focus must shift towards improving how difficult it is for an attacker to move around your environment once they have gotten inside. This phase of the attack often requires some manual control, so identifying and disrupting command and control (C2) channels can pay significant dividends – but realize that only the least sophisticated attacker will reuse the same domains and IPs of a previous attack. So rather than looking for C2 communications via threat intel feeds, your approach needs to be to look for patterns of behavior which look like remote-access trojans (RATs) or hidden tunnels (suspicious forms of beaconing). Barriers to privilege escalation and lateral movement come down to cyber-hygiene related to patching (are there easily accessible exploits for local privilege escalation?), rights management (are accounts granted overly generous privileges?) and network segmentation (is it easy to traverse the network?). Most of the current raft of ransomware attacks have utilized the serial compromise of credentials to move from the initial point-of-entry to more useful parts of the network.
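The "patterns of behavior" worth hunting for, such as RAT-style beaconing, often show up as outbound connections with unnaturally regular timing. A toy sketch of that idea, with the thresholds invented for illustration, flags a host whose inter-connection intervals vary very little:

```python
import statistics

def looks_like_beaconing(timestamps, max_cv=0.1, min_events=5):
    """Flag a series of outbound-connection timestamps whose intervals
    are suspiciously regular (low coefficient of variation)."""
    if len(timestamps) < min_events:
        return False
    intervals = [b - a for a, b in zip(timestamps, timestamps[1:])]
    mean = statistics.mean(intervals)
    if mean <= 0:
        return False
    cv = statistics.pstdev(intervals) / mean  # relative spread of intervals
    return cv <= max_cv

# A C2 implant phoning home every ~60s vs. a human browsing in bursts.
implant = [0, 60, 120, 181, 240, 300]
human = [0, 2, 3, 47, 160, 900]
assert looks_like_beaconing(implant) is True
assert looks_like_beaconing(human) is False
```

Real detectors also account for deliberate jitter, payload sizes, and destination reputation, but the behavioral framing, regularity rather than known-bad indicators, is the point the article makes.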


The rise and fall of merit

Wooldridge identifies Plato’s Republic as the origin of the concept of meritocracy, in which the Athenian philosopher imagined a society run by an intellectual elite, “who have the ability to think more deeply, see more clearly and rule more justly than anyone else.” Crucially, Plato’s ruling class was remade each generation—aristocrats were not assumed to pass on their talents—and it prized women as highly as men. Wooldridge finds meritocratic leanings in other pre-modern societies, including China, which began in the fifth century to use exams to recruit civil servants. But it was the expansion of the state in Europe in the early modern period that saw meritocracy first take root, albeit in a paradoxical way. As states expanded, demand for capable bureaucrats outgrew the ability of the aristocracy to produce them. The solution was to look downward and offer patronage to talented lowborns. Men such as French dramatist Jean Racine; London diarist Samuel Pepys; economist Adam Smith; and Henry VIII’s right-hand man, Thomas Cromwell, were all plucked from obscurity by favoritism. 


Intel Advances Architecture for Data Center, HPC-AI and Client Computing

This x86 core is not only the highest performing CPU core Intel has ever built, but it also delivers a step function in CPU architecture performance that will drive the next decade of compute. It was designed as a wider, deeper and smarter architecture to expose more parallelism, increase execution parallelism, reduce latency and increase general-purpose performance. It also helps support large-data and large-code-footprint applications. Performance-core provides a geomean improvement of about 19% across a wide range of workloads over the current 11th Gen Intel® Core™ architecture (Cypress Cove core) at the same frequency. Targeted for data center processors and for the evolving trends in machine learning, Performance-core brings dedicated hardware, including Intel's new Advanced Matrix Extensions (AMX), to perform matrix multiplication operations for an order-of-magnitude performance gain: a nearly 8x increase in artificial intelligence acceleration. This is architected for software ease of use, leveraging the x86 programming model.


A Soft, Wearable Brain–Machine Interface

Being both flexible and soft, the EEG scalp can be worn over hair and requires no gels or pastes to keep in place. The improved signal recording is largely down to the micro-needle electrodes, invisible to the naked eye, which penetrate the outermost layer of the skin. "You won't feel anything because [they are] too small to be detected by nerves," says Woon-Hong Yeo of the Georgia Institute of Technology. In conventional EEG set-ups, he adds, any motion like blinking or teeth grinding by the wearer causes signal degradation. "But once you make it ultra-light, thin, like our device, then you can minimize all of those motion issues." The team used machine learning to analyze and classify the neural signals received by the system and identify when the wearer was imagining motor activity. That, says Yeo, is the essential component of a BMI, to distinguish between different types of inputs. "Typically, people use machine learning or deep learning… We used convolutional neural networks." This type of deep learning is typically used in computer vision tasks such as pattern recognition or facial recognition, and "not exclusively for brain signals," Yeo adds. 
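At the heart of the convolutional approach Yeo mentions is the idea of sliding a learned pattern detector along a signal. As a drastically simplified stand-in for a real CNN (the signals and template below are made up, and real systems learn their kernels from data rather than hand-writing them), here is a pure-Python 1-D convolution used to score how strongly a template appears anywhere in a signal:

```python
def conv1d(signal, kernel):
    """Valid-mode 1-D convolution (cross-correlation), as used in the
    first layer of a CNN to detect a local pattern anywhere in a signal."""
    k = len(kernel)
    return [sum(signal[i + j] * kernel[j] for j in range(k))
            for i in range(len(signal) - k + 1)]

def pattern_score(signal, template):
    # Max response: how strongly the template occurs anywhere in the signal.
    return max(conv1d(signal, template))

template = [1, -1, 1]             # toy "motor imagery" signature
signal_a = [0, 1, -1, 1, 0, 0]    # contains the signature
signal_b = [0, 0.1, 0.1, 0, 0.1, 0]

assert pattern_score(signal_a, template) > pattern_score(signal_b, template)
```

A real BMI classifier stacks many such learned filters with nonlinearities and a final classification layer, but the core operation, pattern matching by convolution, is the same.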


How to proactively defend against Mozi IoT botnet

While the botnet itself is not new, Microsoft’s IoT security researchers recently discovered that Mozi has evolved to achieve persistence on network gateways manufactured by Netgear, Huawei, and ZTE. It does this using clever persistence techniques that are specifically adapted to each gateway’s particular architecture. Network gateways are a particularly juicy target for adversaries because they are ideal as initial access points to corporate networks. Adversaries can search the internet for vulnerable devices via scanning tools like Shodan, infect them, perform reconnaissance, and then move laterally to compromise higher value targets—including information systems and critical industrial control system (ICS) devices in the operational technology (OT) networks. By infecting routers, they can perform man-in-the-middle (MITM) attacks—via HTTP hijacking and DNS spoofing—to compromise endpoints and deploy ransomware or cause safety incidents in OT facilities. In the diagram below we show just one example of how the vulnerabilities and newly discovered persistence techniques could be used together.


CBAP certification: A high-profile credential for business analysts

CBAP is the most advanced of IIBA’s core sequence of credentials for business analysts. It follows the Entry Certificate in Business Analysis (ECBA) and the Certification for Competency in Business Analysis (CCBA). As you might expect, the requirements get more extensive as you climb the ladder: CBAP requires more training, work experience, and knowledge area expertise. AdaptiveUS, a company that offers training for all of IIBA’s certs, breaks down the various requirements, but the important thing to know is that CBAP holders are at the top of the heap; while you don’t need to have the lower-level certs to get your CBAP certification, you should be fairly well established in your career as a BA before you consider it. Like IIBA’s other certs, the CBAP draws from A Guide to the Business Analysis Body of Knowledge, also known as the BABOK Guide. The BABOK Guide is a publication from IIBA that aims to serve as a bible for the business analysis industry, collecting best practices from real-world practitioners. It was first published in 2005 and is continuously updated. 


A Short Introduction to Apache Iceberg

Partitioning reduces query response time in Apache Hive because data is stored in horizontal slices. In Hive, partitions are explicit: they appear as a column and must be given partition values. This approach gives Hive several problems. It cannot validate partition values, so it is fully dependent on the writer to produce the correct value; it is 100% dependent on the user to write queries correctly; and working queries are tightly coupled to the table’s partitioning scheme, so the partitioning configuration cannot be changed without breaking queries. Apache Iceberg introduces the concept of hidden partitioning, where the reading of unnecessary partitions is avoided automatically. Data consumers that fire the queries don’t need to know how the table is partitioned or add extra filters to their queries, and Iceberg partition layouts can evolve as needed. Iceberg can hide partitioning because it does not require user-maintained partition columns: it produces partition values by taking a column value and optionally transforming it.
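The core mechanism, partition values derived by transforming a column value without the query author ever naming a partition column, can be sketched in a few lines. This is a conceptual Python illustration of the idea, not Iceberg's implementation (Iceberg declares transforms such as day or bucket in table metadata):

```python
from datetime import datetime

# Hidden partitioning, conceptually: the table spec maps a source column
# to a transform, and partition values are derived automatically on write.
# Readers filter on the source column; the engine prunes partitions for them.

def day_transform(ts: datetime) -> str:
    return ts.strftime("%Y-%m-%d")

partition_spec = {"source_column": "event_ts", "transform": day_transform}

table = {}  # partition value -> rows

def write(row):
    key = partition_spec["transform"](row[partition_spec["source_column"]])
    table.setdefault(key, []).append(row)

def read(predicate_day: str):
    # The reader never names a partition column -- pruning is automatic.
    return table.get(predicate_day, [])

write({"event_ts": datetime(2021, 8, 17, 9, 30), "user": "a"})
write({"event_ts": datetime(2021, 8, 17, 23, 59), "user": "b"})
write({"event_ts": datetime(2021, 8, 18, 0, 1), "user": "c"})

assert len(read("2021-08-17")) == 2
assert len(read("2021-08-18")) == 1
```

Because the transform lives in the table metadata rather than in user queries, it can be changed later (partition evolution) without rewriting the queries that read the table.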



Quote for the day:

"Be willing to make decisions. That's the most important quality in a good leader." -- General George S. Patton, Jr.

Daily Tech Digest - August 20, 2021

Identity security: a more assertive approach in the new digital world

Perimeter-based security, where organisations only allow trusted parties with the right privileges to enter and leave doesn’t suit the modern digitalised, distributed environment of remote work and cloud applications. It’s just not possible to put a wall around a business that’s spread across multiple private and public clouds and on-premises locations. This has led to the emergence of approaches like Zero-Trust – an approach built on the idea that organisations should not automatically trust anyone or anything – and the growth of identity security as a discipline, which incorporates Zero-Trust principles at the scale and complexity required by modern digital business. Zero-Trust frameworks demand that anyone trying to access an organisation’s system is verified every time before granting access on a ‘least privilege’ basis, which is particularly useful in the context of the growing need to audit machine identities. Typically, they operate by collecting information about the user, endpoint, application, server, policies and all activities related to them and feeding it into a data pool which fuels machine learning (ML).


How Can We Make It Easier To Implant a Brain-Computer Interface?

As for implantable BCIs, so far there is only the Blackrock NeuroPort Array (Utah Array) implant, which also has the largest number of subjects implanted and the longest documented implantation times, and the Stentrode from Synchron, that has just recorded its first two implanted patients. The latter is essentially based on a stent that is inserted into the blood vessels in the brain and used to record EEG-type data (local field potentials (LFPs)). It is a very clever solution and surgical approach, and I do believe that it has great potential for a subset of use cases that do not require the high level of spatial and temporal resolution that our electrodes are offering. I am also looking forward to seeing the device’s long term performance. Our device records single unit action potentials (i.e., signals from individual neurons) and LFPs with high temporal and spatial resolution and high channel count, allowing significant spatial coverage of the neural tissue. It is implanted by a neurosurgeon who creates a small craniotomy (i.e., opens a small hole in the skull and dura), inserts the devices in the previously determined location by manually placing it in the correct area.


Artificial Intelligence (AI): 4 characteristics of successful teams

In most instances, AI pilot programs show promising results but then fail to scale. Accenture surveys point to 84 percent of C-suite executives acknowledging that scaling AI is important for future growth, but a whopping 76 percent also admit that they are struggling to do so. The only way to realize the full potential of AI is by scaling it across the enterprise. Unfortunately, some AI teams think only in terms of executing a workable prototype to establish proof-of-concept, or at best transform a department or function. Teams that think enterprise-scale at the design stage can go successfully from pilot to enterprise-scale production. They often build and work on ML-Ops platforms to standardize the ML lifecycle and build a factory line for data preparation, cataloguing, model management, AI assurance, and more. AI technologies demand huge compute and storage capacities, which often only large, sophisticated organizations can afford. Because resources are limited, AI access is privileged in most companies. This compromises performance because fewer minds mean fewer ideas, fewer identified problems, and fewer innovations.


Software Testing in the World of Next-Gen Technologies

If there is a technology that has gained momentum during the past decade, it is artificial intelligence. AI offers the potential to mimic human tasks and improve operations through its own intellect, and the logic it brings to business shows scope for productive inferences. However, the benefit of AI can only be achieved by feeding computers with data sets, and this needs the right QA and testing practices. As long as automation testing must be implemented to derive results, performance can only be achieved by using the right input data, leading to effective processing. Moreover, the improvement of AI solutions is beneficial not only for other industries but for QA itself, since many testing and quality assurance processes depend on automation technology powered by artificial intelligence. The introduction of artificial intelligence into the testing process has the potential to enable smarter testing. So, the testing of AI solutions could enable software technologies to work with better reasoning and problem-solving capabilities.


What Makes Agile Transformations Successful? Results From A Scientific Study

The ultimate test of any model is to test it with every Scrum team and every organization. Since this is not practically feasible, scientists use advanced statistical techniques to draw conclusions about the population from a smaller sample of data from that population. Two things are important here. The first is that the sample must be big enough to reliably distinguish effects from the noise that always exists in data. The second is that the sample must be representative enough of the larger population in order to generalize findings to it. It is easy to understand why. Suppose that you’re tasked with testing the purity of the water in a lake. You can’t feasibly check every drop of water for contaminants. But you can sample some of the water and test it. This sample has to be big enough to detect contaminants and small enough to remain feasible. It's also possible that contaminants are not equally distributed across the lake. So it's a good idea to sample and test a bucket of water at various spots from the lake. This is effectively what happens here.
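The "big enough to distinguish effects from noise" requirement has a standard back-of-the-envelope form: to estimate a proportion within a margin of error e at confidence level z, you need roughly n = z²·p(1−p)/e² respondents. A quick illustration of that textbook formula (not a figure taken from the study itself):

```python
import math

def sample_size_for_proportion(margin_of_error, z=1.96, p=0.5):
    """Minimum sample size to estimate a proportion within +/- margin_of_error
    at ~95% confidence (z=1.96); p=0.5 is the worst case."""
    return math.ceil(z * z * p * (1 - p) / margin_of_error ** 2)

# Halving the margin of error roughly quadruples the required sample.
assert sample_size_for_proportion(0.05) == 385
assert sample_size_for_proportion(0.025) == 1537
```

Representativeness is the separate, second requirement: just as the water must be sampled at various spots in the lake, the teams surveyed must be spread across organizations, domains, and maturity levels, or no sample size will save the conclusions.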


OAuth 2.0 and OIDC Fundamentals for Authentication and Authorization

The main goal of OAuth 2.0 is delegated authorization. In other words, as we saw earlier, the primary purpose of OAuth 2.0 is to grant an app access to data owned by another app. OAuth 2.0 does not focus on authentication, and as such, any authentication implementation using OAuth 2.0 is non-standard. That’s where OpenID Connect (OIDC) comes in. OIDC adds a standards-based authentication layer on top of OAuth 2.0. The Authorization Server in the OAuth 2.0 flows now assumes the role of Identity Server (or OIDC Provider). The underlying protocol is almost identical to OAuth 2.0, except that the Identity Server delivers an Identity Token (ID Token) to the requesting app. The Identity Token is a standard way of encoding the claims about the authentication of the user. We will talk more about identity tokens later. ... For both these flows, the app/client must be registered with the Authorization Server. The registration process results in the generation of a client_id and a client_secret, which must then be configured on the app/client requesting authentication.
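An ID Token is a JWT: three base64url-encoded segments (header, claims, signature) joined by dots. As a sketch of what "a standard way of encoding the claims" looks like in practice, the stdlib-only Python below builds and decodes the claims segment; note that it performs no signature verification, which a real client must always do, and the issuer and client_id values are invented:

```python
import base64
import json

def b64url(data: bytes) -> str:
    return base64.urlsafe_b64encode(data).rstrip(b"=").decode()

def b64url_decode(segment: str) -> bytes:
    return base64.urlsafe_b64decode(segment + "=" * (-len(segment) % 4))

# Claims an OIDC Identity Server typically puts in an ID Token:
# issuer, subject, audience (the registered client_id), and expiry.
claims = {"iss": "https://idp.example.com", "sub": "user-123",
          "aud": "my-client-id", "exp": 1700000000}
header = {"alg": "RS256", "typ": "JWT"}

id_token = ".".join([b64url(json.dumps(header).encode()),
                     b64url(json.dumps(claims).encode()),
                     "signature-goes-here"])  # real tokens are signed

# A client splits on "." and decodes the middle segment to read the claims.
decoded = json.loads(b64url_decode(id_token.split(".")[1]))
assert decoded["sub"] == "user-123" and decoded["aud"] == "my-client-id"
```

In production, use a maintained JWT library that validates the signature, `iss`, `aud`, and `exp` before trusting any claim.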


How Biometric Solutions Are Shaping Workplace Security

Today, the corporate world and biometric technology go hand in hand. Companies cannot operate seamlessly without biometrics. Regular security checks just don’t cut it in companies anymore. Since biometric technologies are designed specifically to offer the highest level of security, there is limited to no room when it comes to defrauding these systems. Thus, technologies like ID Document Capture, Selfie Capture, 3D Face Map Creation, etc., are becoming the best way to secure the workplace. Biometric technology allows for specific data collection. It doesn’t just reduce the risk of a data breach but also protects important data in offices. Whether it’s cards, passwords, documents, etc., biometric technology eliminates the need for such hackable security implementations at the workplace. All biometric data like fingerprints, facial mapping, and so on are extremely difficult to replicate. Certain biological characteristics don’t change with time, and that prevents authentication errors. Hence, there’s limited scope for identity replication or mimicry. Customized personal identity access control has become an employee’s right of sorts. 


How to avoid being left behind in today’s fast-paced marketplace

The ability to speed up processes and respond more quickly to a highly dynamic market is the key to survival in today’s competitive business environment. For many large businesses, the ERP system forms a crucial part of the digital core, which is supplemented by best-of-breed applications in areas such as customer experience, supply chain, and asset management. When it comes to digitalisation, organisations will often focus on these applications and the connections between them. However, we often see businesses forget to automate processes in the digital core itself — an oversight that can negatively impact other digitalisation efforts. For example, the ability to analyse demand trends on social media in the customer-focused application can offer valuable insights, but if it takes months for the product data needed to launch a new product variant to be accessed, customer trends are likely to have already moved on. If we look more closely at the process of launching a new product to market, this is a prime example of where digital transformation can be applied to help manufacturers remain agile and respond to market trends more quickly. 


FireEye, CISA Warn of Critical IoT Device Vulnerability

Kalay is a network protocol that helps devices easily connect to a software application. In most cases, the protocol is implemented in IoT devices through a software development kit that's typically installed by original equipment manufacturers. That makes tracking devices that use the protocol difficult, the FireEye researchers note. The Kalay protocol is used in a variety of enterprise IoT and connected devices, including security cameras, but also dozens of consumer devices, such as "smart" baby monitors and DVRs, the FireEye report states. "Because the Kalay platform is intended to be used transparently and is bundled as part of the OEM manufacturing process, [FireEye] Mandiant was not able to create a complete list of affected devices and geographic regions," says Dillon Franke, one of the three FireEye researcher who conducted the research on the vulnerability. FireEye's Mandiant Red Team first uncovered the vulnerability in 2020. If exploited, the flaw can allow an attacker to remotely control a vulnerable device, "resulting in the ability to listen to live audio, watch real-time video data and compromise device credentials for further attacks based on exposed device functionality," the security firm reports.


An Introduction to Blockchain

The distributed ledger created using blockchain technology is unlike a traditional network because it does not have the central authority common in a traditional network structure. Decision-making power usually resides with a central authority, who decides on all aspects of the environment. Access to the network and data is subject to the individual responsible for the environment. The traditional database structure is therefore controlled by power. This is not to say that a traditional network structure is not effective; certain business functions may best be managed by a central authority. However, such a network structure is not without its challenges. Transactions take time to process and cost money; they are not validated by all parties due to limited network participation, and they are prone to error and vulnerable to hacking. Processing transactions in a traditional network structure also requires technical skills. In contrast, the distributed ledger is controlled by rules, not a central authority. The database is accessible to all members of the network and installed on all the computers that use it. Consensus between members is required to add transactions to the database.
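The rules-based, consensus-driven control described above rests on a simple structural trick: each block stores the hash of its predecessor, so altering any past transaction invalidates every later block. A minimal standard-library sketch of that hash chain:

```python
import hashlib
import json

def block_hash(block):
    payload = json.dumps(block, sort_keys=True).encode()
    return hashlib.sha256(payload).hexdigest()

def add_block(chain, transactions):
    prev = block_hash(chain[-1]) if chain else "0" * 64
    chain.append({"prev_hash": prev, "transactions": transactions})

def is_valid(chain):
    # Every block must reference the actual hash of the one before it.
    return all(chain[i]["prev_hash"] == block_hash(chain[i - 1])
               for i in range(1, len(chain)))

chain = []
add_block(chain, ["alice pays bob 5"])
add_block(chain, ["bob pays carol 2"])
add_block(chain, ["carol pays dan 1"])
assert is_valid(chain)

chain[1]["transactions"] = ["bob pays mallory 200"]  # tamper with history
assert not is_valid(chain)
```

Real blockchains add signatures, a consensus protocol, and replication across every member's copy, but the tamper-evidence all flows from this chained hashing.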



Quote for the day:

"Nothing is less productive than to make more efficient what should not be done at all." -- Peter Drucker

Daily Tech Digest - August 17, 2021

It May Be Too Early to Prepare Your Data Center for Quantum Computing

The fact that there are multiple radically different approaches to quantum computing under development, with no assurance that any will meet market success (let alone market dominance), speaks to quantum computing's infancy. Merzbacher compares the situation to the early days of microprocessors, when there was a debate on whether computer chips should be made of silicon or germanium. "There were arguments for germanium. It's a better system for semiconductor computing in some sense, but it's expensive, not as easy to manufacture, and it's not as common, so in the end, it was silicon," she said. Quantum computing hasn't reached a point where "everybody settled on a technology here, and so there still is uncertainty. It may be that the IBM approach is better for certain types of computing, and then the trapped-ion approaches [are] better for others." This past March, IonQ became the first publicly traded pure-play quantum computing company via a SPAC merger. According to Merzbacher, the startup appears to have its eye on marketing rack-mounted quantum hardware to the data center market, although it hasn't voiced such intentions publicly.


Lucas Cavalcanti on Using Clojure, Microservices, Hexagonal Architecture ...

One thing to mention about Cockburn's Hexagonal Architecture is that it was born into a Java, object-oriented world. And just to give some context: what we use is not exactly that implementation, but it uses that idea as an inspiration. In Cockburn's idea, you have a web server, and every operation on that web server is a port, and you have the adapter. The port is an interface, and the adapter is the actual implementation of that interface, and the rest is the classes implementing that. In our implementation, we use that idea of separating a port, which is the communication with the external world, from the adapter, which is the code that translates that communication into actual code you can execute. Then the controller is the piece that gets that communication from the external world and runs the actual business logic. I think Cockburn's definition stops at the controller, and after the controller it's already business logic, since we are working in Clojure and functional programming.
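The port/adapter/controller separation described here can be sketched outside Clojure as well. In the small Python example below (all names invented for illustration), the port is an interface to the external world, the adapter is one concrete implementation, and the controller holds pure business logic:

```python
from typing import Protocol

class EmailPort(Protocol):
    """The port: an interface describing one way to reach the outside world."""
    def send(self, to: str, body: str) -> None: ...

class FakeEmailAdapter:
    """An adapter: one concrete implementation (here, an in-memory fake)."""
    def __init__(self):
        self.sent = []
    def send(self, to: str, body: str) -> None:
        self.sent.append((to, body))

def greet_new_user(email: EmailPort, username: str) -> str:
    # The controller: pure business logic, unaware of SMTP, HTTP, or queues.
    message = f"Welcome aboard, {username}!"
    email.send(f"{username}@example.com", message)
    return message

adapter = FakeEmailAdapter()
assert greet_new_user(adapter, "ada") == "Welcome aboard, ada!"
assert adapter.sent == [("ada@example.com", "Welcome aboard, ada!")]
```

Swapping the fake adapter for a real SMTP one changes nothing in the controller, which is exactly the decoupling the architecture is after.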


Excel 4, Yes Excel 4, Can Haunt Your Cloud Security

Scary? Sure, but still, how hard can it be to spot a macro attack? It’s harder than you might think. Vigna explained that XLM makes it easy to create dangerous but obfuscated code. It started with trivial obfuscation methods. For example, the code was scattered hither and yon across the sheet and written in a white font on a white background. Kid’s stuff. But later versions started using more sophisticated methods, such as hiding sheets with the VeryHidden flag instead of Hidden. Users can’t unhide a VeryHidden sheet from within Excel; you must uncover VeryHidden data with a VBA script or even resort to a hex editor. How many Excel users will even know what a hex editor is, never mind use it? Adding insult to injury, Excel 4 doesn’t differentiate between code and data. So, yes, what looks like data may be executed as code. It gets worse. Vigna added, “Attackers may build the true payload one character at a time. They may add a time dependence, making the current day a decryption key for the code. On a wrong day, you’ll just see gibberish.” As VMware security researcher Stefano Ortolani added, Excel 4.0 macros are “easy to use but also easy to complicate.”
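The "payload one character at a time, keyed to the current day" trick Vigna describes can be illustrated in a few lines. This is a harmless Python sketch of the idea, not real XLM macro code; the function names and the offset scheme are invented for the example.

```python
# Sketch of date-keyed, character-by-character payload obfuscation:
# each character is stored as a number offset by a key derived from the day,
# so the cells only decode to the real formula on the "right" day.

def encode_payload(payload: str, key_day: int) -> list[int]:
    # Scatter-friendly representation: one small integer per character.
    return [ord(c) + key_day for c in payload]

def decode_payload(cells: list[int], today: int) -> str:
    # Reassemble one character at a time using today's date as the key.
    return "".join(chr(n - today) for n in cells)

cells = encode_payload("=EXEC(cmd)", key_day=13)
print(decode_payload(cells, today=13))  # the intended formula
print(decode_payload(cells, today=14))  # gibberish on any other day
```

A static scanner that looks for the formula text in the file sees only a list of numbers, which is why this style of obfuscation is hard to catch with signatures.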


Agile Data Labeling: What it is and why you need it

The concept of autolabeling, which consists of using an ML model to generate “synthetic” labels, has become increasingly popular in recent years, offering hope to those tired of the status quo, but it is only one attempt at streamlining data labeling. The truth, though, is that no single approach will solve all issues: at the center of autolabeling, for instance, is a chicken-and-egg problem, since you need labeled data to train the very model that is supposed to generate the labels. That is why the concept of Human-in-the-Loop labeling is gaining traction. That said, these attempts feel uncoordinated and bring little to no relief to companies, who often struggle to see how the new paradigms apply to their own challenges. That’s why the industry needs more visibility and transparency regarding existing tools (a wonderful initial attempt at this is the TWIML Solutions Guide, though it’s not specifically targeted at labeling solutions), easy integration between those tools, and an end-to-end labeling workflow that naturally integrates with the rest of the ML lifecycle. Outsourcing the process might not be an option for specialty use cases for which no third party is capable of delivering satisfactory results.
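The Human-in-the-Loop idea mentioned above reduces to a simple routing rule: accept the model's synthetic label when it is confident, and escalate to a human otherwise. A minimal sketch, with all names and the 0.9 threshold chosen purely for illustration:

```python
# Human-in-the-loop autolabeling sketch: confident model predictions become
# synthetic labels; low-confidence items are routed to a human annotator.

def autolabel(items, model, ask_human, threshold=0.9):
    labels = {}
    for item in items:
        label, confidence = model(item)
        if confidence >= threshold:
            labels[item] = label            # accept the synthetic label
        else:
            labels[item] = ask_human(item)  # fall back to a human annotator
    return labels

# Toy stand-in for a trained model: confident on short strings, unsure on long.
model = lambda s: ("short", 0.95) if len(s) < 5 else ("long", 0.5)
result = autolabel(["cat", "elephant"], model, ask_human=lambda s: "reviewed")
print(result)  # {'cat': 'short', 'elephant': 'reviewed'}
```

In practice, the human-reviewed items would be fed back into training, which is what gradually breaks the chicken-and-egg cycle.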


Brain-computer interfaces are making big progress this year

The ability to translate brain activity into actions was achieved decades ago. The main challenge for private companies today is building commercial products for the masses that can find common signals across different brains that translate to similar actions, such as a brain-wave pattern that means “move my right arm.” This doesn’t mean the engine should be able to do so without any fine-tuning. In Neuralink’s MindPong demo, the rhesus monkey went through a few minutes of calibration before the model was fine-tuned to his brain’s neural activity patterns. We can expect this routine with other tasks as well, though at some point the engine might be powerful enough to predict the right command without any fine-tuning, which is then called zero-shot learning. Fortunately, AI research in pattern detection has made huge strides, specifically in the domains of vision, audio, and text, generating more robust techniques and architectures that enable AI applications to generalize. The groundbreaking paper “Attention Is All You Need” inspired many other exciting papers with its proposed Transformer architecture.


Here’s how hackers are cracking two-factor authentication security

Our experiments revealed a malicious actor can remotely access a user’s SMS-based 2FA with little effort, through the use of a popular app (name and type withheld for security reasons) designed to synchronize user’s notifications across different devices. Specifically, attackers can leverage a compromised email/password combination connected to a Google account (such as username@gmail.com) to nefariously install a readily available message mirroring app on a victim’s smartphone via Google Play. This is a realistic scenario since it’s common for users to use the same credentials across a variety of services. Using a password manager is an effective way to make your first line of authentication — your username/password login — more secure. Once the app is installed, the attacker can apply simple social engineering techniques to convince the user to enable the permissions required for the app to function properly. For example, they may pretend to be calling from a legitimate service provider to persuade the user to enable the permissions. After this, they can remotely receive all communications sent to the victim’s phone, including one-time codes used for 2FA.


Agile drives business growth, but culture is stifling progress

Senior leaders who invest in upskilling will ensure a culture of innovation in the enterprise. Skills needed today and in the future are identified, and learning curves are accelerated by providing immersive experiences to supplement learning. At Infosys, we categorize employees into different skill horizons based on workers’ core, digital, and emerging skills. To stay close to the customer through better insights, data must not be a lazy asset locked in systems of record; it should be accessible through an end-to-end system that translates customer insights into action. Going further, artificial intelligence can tap into unspoken team behaviors and interactions, which research from CB Insights found increases revenue by as much as 63%. Teams will also need to collaborate effectively and make decisions on their own. This will only happen if leaders understand when to guide and when to trust. In our research, we found that the most effective Agile firms (we call these “Sprinters”) are much more likely to foster servant leadership, along with the seven levers described.


Attackers Change Their Code Obfuscation Methods More Frequently

In an analysis posted last week, researchers at the Microsoft 365 Defender Threat Intelligence Team tracked one cybercriminal group's phishing campaign as the techniques changed at least 10 times over the span of a year. The campaign, dubbed XLS.HTML by the researchers, used plaintext, escape encoding, base64 encoding, and even Morse code, the researchers said. Changing up the encoding of attachments and data is not new, but highlights that attackers understand the need to add variation to avoid detection, the Microsoft researchers said. Microsoft's research is not the first to identify the extensive use of obfuscation. Such techniques are as old as malware itself, but more recently, attackers are switching up their obfuscation techniques more frequently. In addition, increasingly user-friendly tools used by cybercriminals intent on phishing make using sophisticated obfuscation much easier. Messaging security provider Proofpoint documented seven obfuscation techniques in a paper published five years ago, and even then, many of the obfuscation techniques were not new, the company said.
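The point about rotating encodings defeating static detection is easy to demonstrate: the same payload wrapped in different encodings shares no bytes with its plaintext form. A small illustrative Python sketch (the payload string is invented for the example):

```python
# Why static signatures struggle with rotating obfuscation: the same payload
# wrapped as plaintext, URL-escaped text, and base64 yields three blobs that
# look unrelated to a scanner matching raw bytes.
import base64
import urllib.parse

payload = "<script>fetch(stolen)</script>"

variants = {
    "plain":  payload,
    "escape": urllib.parse.quote(payload),                     # %3Cscript%3E...
    "base64": base64.b64encode(payload.encode()).decode(),     # PHNjcmlwdD4...
}

for name, blob in variants.items():
    print(name, "contains raw payload:", payload in blob)
```

Only the plaintext variant matches; each re-encoding forces defenders to either decode every layer or learn a new signature, which is exactly the cat-and-mouse dynamic the Microsoft researchers describe.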


Navigating an asymmetrical recovery

The key for many businesses will be to build scenarios that account for a wider diffusion of results than was needed in the past. Take the cinema business as an example. Instead of sales projections being drawn up in a band between down-10% and up-10%, we’ve seen that some businesses can find themselves in a band between down-70% and up-80%. An unexpected upside sounds like a nice problem to have, but it also can create real operating challenges. Few of the companies whose growth was supercharged during the pandemic had a plan for that level of growth, which led to shortages, stock-outs, and delays that undermined performance. Planning for extremes is almost certain to be critical for some time to come. Although there is considerable liquidity overall in the debt markets, whether from traditional loans, bonds, or newer debt funds, companies’ ability to access these markets will vary widely. Regional and country differences in government support, along with variations in capital availability between companies of different sectors and size, are all creating additional asymmetries and unpredictable balance sheet pressures. 


Driving DevOps With Value Stream Management

A value stream, such as a DevOps pipeline, is simply the end-to-end set of activities that delivers value to our customers, whether internal or external to the organization. In an ideal state, work and information flow efficiently, with minimal delays or queuing of work items. So far, this all sounds great. But good things seldom come easily. Let's start with the fact that there are hundreds of tools available to support a Dev(Sec)Ops toolchain. Moreover, it takes specific skills, effort, cost, and time to integrate and configure the tools selected by your organization. While software developers perform the integration effort, the required skills may differ from those available in your software development teams. Also, such work takes your developers away from their primary job of delivering value via software products for your internal and external customers. In short, asking your development teams to build their Dev(Sec)Ops toolchain configurations is a bit like asking manufacturing operators to build their own manufacturing facilities.



Quote for the day:

"Great leaders are almost always great simplifiers who can cut through argument, debate and doubt to offer a solution everybody can understand." -- General Colin Powell