Showing posts with label Deep Learning. Show all posts
Showing posts with label Deep Learning. Show all posts

Daily Tech Digest - August 27, 2025


Quote for the day:

"Success is the progressive realization of predetermined, worthwhile, personal goals." -- Paul J. Meyer


To counter AI cheating, companies bring back in-person job interviews

Google, Cisco and McKinsey & Co. have all re-instituted in-person interviews for some job candidates over the past year. “Remote work and advancements in AI have made it easier than ever for fake candidates to infiltrate the hiring process,” said Scott McGuckin, vice president of global talent acquisition at Cisco. “Identifying these threats is our priority, which is why we are adapting our hiring process to include increased verification steps and enhanced background checks that may involve an in-person component. ... AI has proven benefits for both job seekers and hiring managers/recruiters. Its use in the job search process grew 6.4% over the past year, while use in core tasks surged even higher, according to online employment marketplace ZipRecruiter. The share of job seekers using AI to draft and refine resumes jumped 39% over last year, while AI-assisted cover letter writing climbed 41%, and AI-based interview prep rose 44%, according to the firm. ... HR and hiring managers should insist on well-lit video interviews, watch for delays or mismatches, ask follow-up questions to spot AI use and verify resume details with background checks and geolocation data. “Some assessment or interview platforms can look at geolocation data, use this to ensure consistency with the resume and application,” Chiba said. 


How procedural memory can cut the cost and complexity of AI agents

Memories are built from an agent’s past experiences, or “trajectories.” The researchers explored storing these memories in two formats: verbatim, step-by-step actions; or distilling these actions into higher-level, script-like abstractions. For retrieval, the agent searches its memory for the most relevant past experience when given a new task. The team experimented with different methods, such vector search, to match the new task’s description to past queries or extracting keywords to find the best fit. The most critical component is the update mechanism. Memp introduces several strategies to ensure the agent’s memory evolves. ... One of the most significant findings for enterprise applications is that procedural memory is transferable. In one experiment, procedural memory generated by the powerful GPT-4o was given to a much smaller model, Qwen2.5-14B. The smaller model saw a significant boost in performance, improving its success rate and reducing the steps needed to complete tasks. According to Fang, this works because smaller models often handle simple, single-step actions well but falter when it comes to long-horizon planning and reasoning. The procedural memory from the larger model effectively fills this capability gap. This suggests that knowledge can be acquired using a state-of-the-art model, then deployed on smaller, more cost-effective models without losing the benefits of that experience.


AI Summaries a New Vector for Malware

The attack uses what researchers call "prompt overdose," a technique in which malicious instructions are repeated dozens of times within invisible HTML styled with properties such as zero opacity, white-on-white text, microscopic font sizes and off-screen positioning. When AI summarizers process this content, the repeated hidden text dominates the model's attention mechanisms, pushing legitimate visible content aside. "When processed by a summarizer, the repeated instructions typically dominate the model's context, causing them to appear prominently - and often exclusively - in the generated summary." ... Cybercriminals have been quick to adapt the technique to fool large language models rather than humans. The attack's effectiveness stems from user reliance on AI-generated summaries for quick content triage, often replacing manual review of original materials. Testing showed that the technique works across AI platforms, including commercial services like Sider.ai and custom-built browser extensions. Researchers also identified factors amplifying the attack's potential impact. Summarizers integrated into widely-used applications could enable mass distribution of social engineering lures across millions of users. The technique could lower technical barriers for ransomware deployment by providing non-technical victims with detailed execution instructions disguised as legitimate troubleshooting advice.


A scalable framework for evaluating health language models

While auto-eval techniques are well equipped to handle the increased volume of evaluation criteria, the completion of the proposed Precise Boolean rubrics by human annotators was prohibitively resource intensive. To mitigate such burden, we refined the Precise Boolean approach to dynamically filter the extensive set of rubric questions, retaining only the most pertinent criteria, conditioned on the specific data being evaluated. This data-driven adaptation, referred to as the Adaptive Precise Boolean rubric, enabled a reduction in the number of evaluations required for each LLM response. ... Current evaluation of LLMs in health often uses Likert scales. We compared this baseline to our data-driven Precise Boolean rubrics. Our results showed significantly higher inter-rater reliability using Precise Boolean rubrics, measured by intra-class correlation coefficients (ICC), compared to traditional Likert rubrics. A key advantage of our approach is its efficiency. The Adaptive Precise Boolean rubrics resulted in high inter-rater agreement of the full Precise Boolean rubric while reducing evaluation time by over 50%. This efficiency gain makes our method faster than even Likert scale evaluations, enhancing the scalability of LLM assessment. The fact that this also provides higher inter-rater reliability supports the argument that this simpler scoring also provides a higher quality signal.


Outdated Fraud Defenses Are a Green Light for Scammers Everywhere

Financial institutions get stuck in a reactive cycle, responding to breaches after the fact and relying heavily on network alerts and reissuing cards en masse to mitigate damage. That’s problematic on all fronts. It’s expensive, increases call center volume and fails to address the root problem. Beyond that, it disrupts the cardholder experience, putting the institution at risk of losing a cardholder’s trust and business. After experiencing a fraudulent attack, cardholders adjust their payment behaviors, regardless of whether the fraudster was successful or not. This could mean they stop using the affected card altogether, switch to a competitor’s product or close their account entirely. ... The tables are turned on the scammer. Instead of detecting fraud as it occurs, financial institutions now have up to 180 days’ lead time to identify a fraud pattern, take action and contain it. This strategic lead time enables early intervention, giving teams the ability to identify emerging fraud typologies, disrupt bad actor behavior patterns and contain the spread before widespread damage occurs. It shifts the institution’s playbook from defense to offense. It also eliminates the need to reissue thousands of cards preemptively, instead identifying small subsets of cardholders most likely to be impacted. Reissues happen only when absolutely necessary, which saves on cost and reputation management. 


SysAdmins: The First Responders of the Digital World

Unlike employees in other departments like sales, finance, marketing, and HR, who can typically log off at 5 p.m. and check out of work until the next morning, IT professionals carry the unique burden of having to be “always on.” For technology vendors in particular, this is especially prevalent; when situations arise that compromise the integrity of key systems and networks, both employees and users can face disruptions to cost organizations revenue and reputational damage. Whether it’s hardware or software issues, the system administrator is there to jump in and patch the issue. ... IT departments are increasingly viewed as “profit protectors,” critical to the bottom line by preventing unplanned expenses and customer churn. As demonstrated by the anecdotes above, system administrators ensure the daily functionality and operational resilience of their organizations, enabling every other team to do their job efficiently. Without system administrators’ constant attention to ensuring things behind the scenes are running smoothly, employees would struggle to fulfill their daily tasks every time an incident occurs. ... Business leaders can show appreciation for these employees by prioritizing mental health initiatives, ensuring IT teams are sufficiently staffed to prevent burnout, and promoting workload balance with generous time-off packages. 


A wake-up call for identity security in devops

The GitHub incident exposed what security teams already suspect—that devops is running headlong into an identity sprawl problem. Identities (human and non-human) are multiplying, permissions are stacking up, and third-party apps are the new soft underbelly. This is where identity security posture management (ISPM) steps in. ISPM takes the principles of cloud security posture management (CSPM)—continuous monitoring, posture scoring, risk-based controls—and applies them to identity. It doesn’t stop at who can log in; it extends into who has access, why they have it, what they can do, and how that access is granted, including via OAuth. ... Modern identity security platforms are stepping in to close this gap. The leading solutions give you deep visibility into the web of permissions spanning developers, service accounts, and third-party OAuth apps. It’s no longer enough to know that a token exists. Teams need full context: who issued the token, what scopes it has, what systems it touches, and how those privileges compare across environments. ... Developers aren’t asking for more security tools, policies, or friction. What they want is clarity, especially if it helps them stay out of the next breach postmortem. That’s why visibility-first approaches work. When security teams show developers exactly what access exists, and why it matters, the conversation shifts from “Why are you blocking me?” to “Thanks for the heads-up.”


"Think Big to Achieve Big": A CEO's advice to today's HR leaders

The traditional perception of HR as an administrative function is obsolete. Today's CHRO is a key driver of organisational transformation, working in close collaboration with the CEO to formulate and achieve overarching goals. This partnership is essential for ensuring that HR initiatives are not just about hiring, but about building a future-ready organisation. This involves enabling talent with the latest technologies, skills, and continuous learning opportunities. Goyal's own collaboration with his CHRO is a model of this integrated approach. They work together to ensure that HR initiatives are fully aligned with the Group's long-term objectives, a dynamic that goes far beyond traditional HR functions. This partnership is what drives sustainable growth and navigates complex challenges. The modern workplace presents a unique set of challenges, from heightened uncertainty to the distinct expectations of Gen Z. Goyal's response to this is a philosophy of active adaptation. To attract and retain young talent, he believes companies must be open to revisiting policies, embracing flexible working hours, and promoting a culture of continuous learning. He emphasises the need for leaders to have an open mindset toward the new generation, just as they would for their own children.


Inside a quantum data center

Quantum-focused measures that might need to be considered include vibrations, electromagnetic sensitivity, and potentially even the speed of the elevators moving hardware between floors. Whether or not there would be one standard encompassing the different types of quantum computers – supercooled, rack-based, optical-tabled etc – or multiple standards to suit all comers is unclear at this stage. ... IBM does also host some dedicated quantum systems at its facilities for customers who don’t want their QPUs on-site, but on-premise enterprise deployments are rare beyond the likes of IBM’s agreement with Cleveland Clinic. They will likely be the exception rather than the norm for enterprises for some time to come, IQM’s Goetz says. “Corporate enterprise customers are not yet buying full systems,” says Goetz. “They are usually accessing the systems through the cloud because they are still ramping up their internal capabilities with the goal to be ready once the quantum computers really have the full commercial value.” Quite what the geography of a world with commercially-useful quantum computers will look like is unclear. Will enterprises be happy with a few centralized ‘quantum cloud’ regions, demand in-country capacity in multiple jurisdictions, or go so far as demanding systems be placed in on-premise or colocated facilities?


Simpler models can outperform deep learning at climate prediction

The researchers see their work as a “cautionary tale” about the risk of deploying large AI models for climate science. While deep-learning models have shown incredible success in domains such as natural language, climate science contains a proven set of physical laws and approximations, and the challenge becomes how to incorporate those into AI models. “We are trying to develop models that are going to be useful and relevant for the kinds of things that decision-makers need going forward when making climate policy choices. While it might be attractive to use the latest, big-picture machine-learning model on a climate problem, what this study shows is that stepping back and really thinking about the problem fundamentals is important and useful,” says study senior author Noelle Selin ... “Large AI methods are very appealing to scientists, but they rarely solve a completely new problem, so implementing an existing solution first is necessary to find out whether the complex machine-learning approach actually improves upon it,” says Lütjens. Some initial results seemed to fly in the face of the researchers’ domain knowledge. The powerful deep-learning model should have been more accurate when making predictions about precipitation, since those data don’t follow a linear pattern. 

Daily Tech Digest - February 12, 2025


Quote for the day:

“If you don’t have a competitive advantage, don’t compete.” -- Jack Welch


Security Is Blocking AI Adoption: Is BYOC the Answer?

Enterprises face unique hurdles in adopting AI at scale. Sensitive data must remain within secure, controlled environments, avoiding public networks or shared infrastructures. Traditional SaaS models often fail to meet these stringent data sovereignty and compliance demands. Beyond this, organizations require granular control, comprehensive auditing and full transparency to trace every AI decision and data access. This ensures vendors cannot interact with sensitive data without explicit approval and documentation. These unmet needs create a significant gap, preventing regulated industries from deploying AI solutions while maintaining compliance and security. ... The concept of Bring Your Own Cloud (BYOC) isn’t new. It emerged as a middle ground between traditional SaaS and on-premises deployments, promising to combine the best of both worlds: the convenience of managed services with the control and security of on-premises infrastructure. However, its history in the industry has been marked by both successes and cautionary tales. Early BYOC implementations often failed to live up to their promises. Some vendors merely deployed their software into customer cloud accounts without proper architectural planning, resulting in what was essentially remotely managed on-premises environments. 


The Importance of Continuing Education in Data and Tech

Continuing education plays a vital role in workforce development and career advancement within the tech industries, where rapid technological advancements and evolving market demands necessitate a culture of lifelong learning. As businesses increasingly rely on sophisticated data analytics, artificial intelligence (AI), and cloud technologies, professionals in these fields must continuously update their skills to remain competitive. Continuing education offers a pathway for individuals to acquire new capabilities, adapt to emerging technologies, and gain proficiency in specialized areas that are in high demand. By engaging in ongoing learning opportunities, tech professionals can enhance their expertise, making them more valuable to their current employers and more attractive to potential future ones. ... Professional certifications and competency-based education have become significant avenues for career advancement in the data and tech field. As the landscape of technology rapidly evolves, organizations increasingly seek professionals who possess validated skills and up-to-date knowledge. Professional certifications serve as tangible proof of one’s expertise in specific areas such as data governance, analytics, cybersecurity, or cloud computing. These certifications, offered by leading industry bodies and tech companies, are designed to align with current industry standards and demands.


Agents, shadow AI and AI factories: Making sense of it all in 2025

“Agentic AI” promises “digital agents” that learn from us, and can perceive, reason problems out in multiple steps and then make autonomous decisions on our behalf. They can solve multilayered questions that require them to interact with many other agents, formulate answers and take actions. Consider forecasting agents in the supply chain predicting customer needs by engaging customer service agents, and then proactively adjusting warehouse stock by engaging inventory agents. Every knowledge worker will find themselves gaining these superhuman capabilities backed by a team of domain-specific task agent workers helping them tackle large complex jobs with less expended effort. ... However, the proliferation of generative, and soon agentic AI, presents a growing problem for IT teams. Maybe you’re familiar with “shadow IT,” where individual departments or users procure their own resources, without IT knowing. In today’s world we have “shadow AI,” and it’s hitting businesses on two fronts. ... Today’s enterprises create value through insights and answers driven by intelligence, setting them apart from their competitors. Just as past industrial revolutions transformed industries — think about steam, electricity, internet and later computer software — the age of AI heralds a new era where the production of intelligence is the core engine of every business. 


Is VMware really becoming the new mainframe?

“CIOs can start to unwind their dependence on VMware,” he says. “But they need to know it may not have any material reduction in their spend with Broadcom over multiple renewals. They’re going to have to get completely off Broadcom.” Still, Warrilow recommends that CIOs running VMware consider alternatives over the long term. They should also look for exit strategies for other market-dominant IT products they use, given that Broadcom has seen early success with VMware, he says. “The cautionary tale for CIOs is that this is just the beginning,” he says. “Every tech investment firm is going to be saying, ‘I want what Broadcom has with their share price.’  ... “The comparison works a bit, maybe from a stickiness perspective, because customers have built their applications and workload using virtualization technology on VMware,” he says. “When they have to do a mass refactoring of applications, it’s very, very hard.” But the analogy has its limitations because many users think of mainframes as a legacy technology, while VMware’s cloud-based products address future challenges, he adds. “The cloud is the future for running your AI workload,” Shenoy says. “Customers have trusted us for the last 20 to 25 years to run their business-critical applications, and the interesting part right now is we are seeing a lot of growth of these AI workloads and container workloads running on VMware.”


Deep Learning – a Necessity

It is essential in architecture that we realize that a skill set is not an arbitrary thing. It isn’t learn one skill and you are done. It also isn’t learn any skill from any background and you’re in. It is the application of all of the identified and necessary skills combined that makes a distinguished architect. It is also important to understand the purpose and context of mastery. Working in a startup is very different from working in a large corporation. Industry can change things significantly as well. Always remember that the profession’s purpose has to be paramount in the learning. For example, both doctors and lawyers have to deal with clients and need human interaction skills to be successful. Yet, the nature and implementation of these differ drastically. We will explore this point in a further article. However, do not underestimate the impact of changing the meaning of the profession while claiming similar skills. The current environment is rife with this kind of co-opting of the terminology and tools to alter the whole purpose of architecture fundamentally. ... In medicine and other professions, an individual studies and practices for 7+ years to become fully independent, and they never stop learning. This learning is tracked by both mentors and the profession. Because medicine is so essential to humans it is important that professionals are measured and constantly update and hone their competencies.


Crawl, then walk, before you run with AI agents, experts recommend

The best bet for percolating AI agents throughout the organization is to keep things as simple as possible. "Companies and employees that have already found ways to operationalize intelligent agents for simple tasks are best placed to exploit the next wave with agentic AI," said Benjamin Lee, professor of computer and information science at the University of Pennsylvania. "These employees would already be engaging generative AI for simple tasks and they would be manually breaking complex tasks into simpler tasks for the AI. Such employees would already be seeing productivity gains from using generative AI for these simple tasks." Rowan agreed that enterprises should adopt a crawl, walk, run approach: "Begin with a pilot program to explore the potential of multiagent systems in a controlled, measurable environment." "Most people say AI is at the toddler stage, whereas agentic AI is like a tween," said Ben Sapp, global practice lead of intelligence at Digital.ai. "It's functional and knows how to execute certain functions." Enterprises and their technology teams "should socialize the use of generative AI for simple tasks within their organizations," Lee continued. "They should have strategies for breaking complex tasks into simpler ones so that, when intelligent agents become a reality, the sources of productivity gains are transparent, easily understood, and trusted."


Growth of digital wallet use shaking up payment regulations and benefits delivery

Australian banks are calling on the government to pass legislation that accommodates payments with digital wallets within the country’s regulatory framework. A release from the Australian Banking Association (ABA) argues that with the country’s residents making $20 billion worth of payments across 500 million transactions each month with mobile wallets, all players within the payment ecosystem should be under the remit of the Reserve Bank of Australia. ... Digital wallets are by far the most popular method of making cross-border payments, according to a new report from Payments Cards & Mobile. The How Digital Wallets Are Transforming Cross-Border Transactions report shows digital wallets are chosen for international transactions by 42.1 percent. That makes them more people than the next two most popular methods, money transfer services (16.8 percent) and bank accounts (14.8 percent) combined. Transactions with digital wallets are much faster than wire transfers, are available to people who don’t possess bank accounts, and have lower fees than bank transfers, the report says. Interoperability remains a challenge, and regulations and infrastructure limitations could pose barriers to adoption, but the report authors only expect the dominance of digital wallets to increase in the years ahead.


My vision is to create a digital twin of our entire operations, from design and manufacturing to products and customers

We approach this transformation from three dimensions. First is empathy – truly understanding not just who our customers are, but their emotions. This is where the concept of creating a ‘digital twin’ of the customer comes in. Second is innovation – not just adopting new technologies but ensuring that our processes are lean, digitised, and seamless throughout the customer journey, from research to purchase, service, and brand loyalty. The goal is to provide a consistent and empathetic experience across all touchpoints.  ... The first challenge is identifying our customers. For example, if a distributor in one business also buys from another or if a consumer connects with one of our industrial projects, it’s hard to track. To address this, we launched a customer UID project, which has been in progress for months. It helps us identify customers across channels while keeping an eye on privacy and adhering to upcoming data protection regulations. The second part involves gathering all customer-related data in one place. Over the past three years, we unified all customer interactions into a single platform with a one CRM strategy, which was complex but essential. Now, with AI solutions like social listening combined with sentiment analysis, we can understand what our customers are saying about us and where we need to improve, both in India and globally. 


Will AI Chip Supply Dry Up and Turn Your Project Into a Costly Monster?

CIOs and other IT leaders face tremendous pressure to quickly develop GenAI strategies in the face of a potential supply shortage. With the cost of individual units, spending can easily reach into the multi-million-dollar range. But it wouldn’t be the first time companies have dealt with semiconductor shortages. During the COVID-19 pandemic, a spike in PC demand for remote work met with global shipping disruptions to create a chip drought that impacted everything from refrigerators to automobiles and PCs. “One thing we learned was the importance of supply chain resiliency, not being overly dependent on any one supplier and understanding what your alternatives are,” Hoecker says. “When we work with clients to make sure they have a more resilient supply chain, we consider a few things … One is making sure they rethink how much inventory do they want to keep for their most critical components so they can survive any potential shocks.” She adds, “Another is geographic resiliency, or understanding where your components come from and do you feel like you’re overly exposed to any one supplier or any one geography.” Nvidia’s GPUs, she notes, are harder to find alternatives for -- but other chips do have alternatives. “There are other places where you can dual-source or find more resiliency in your marketplace.”


WTF? Why the cybersecurity sector is overrun with acronyms

Imagine an organization is in the midst of a massive hack or security breach, and employees or clients are having to Google frantically to translate company emails, memos or crisis plans, slowing down the response. When these acronyms inevitably migrate into a cybersecurity company’s external marketing or communications efforts, they’re almost guaranteed to cause the general public to tune out news about issues and innovations that could have a far-reaching impact on how people live their lives and conduct their businesses. This is especially true as artificial intelligence (AI!) and machine learning (ML!) technologies expand and new acronyms emerge to keep pace with developments. Acronyms can also have unfortunate real-life connotations — point of sale, to name just one example. When shortened to POS, it can suggest something is… well, crappy. ... So, what’s behind the tendency to shorten terms to a jumble of often incomprehensible acronyms and abbreviations? “On the one hand, acronyms, abbreviations and jargon are used to achieve brevity, standardization and efficiency in communication, so if a profession is steeped in complex and technical language, it will likely be flowing with acronyms,” says Ian P. McCarthy, a professor of innovation and operations management at Simon Fraser University in Burnaby, British Columbia.

Daily Tech Digest - February 19, 2024

Why artificial general intelligence lies beyond deep learning

Decision-making under deep uncertainty (DMDU) methods such as Robust Decision-Making may provide a conceptual framework to realize AGI reasoning over choices. DMDU methods analyze the vulnerability of potential alternative decisions across various future scenarios without requiring constant retraining on new data. They evaluate decisions by pinpointing critical factors common among those actions that fail to meet predetermined outcome criteria. The goal is to identify decisions that demonstrate robustness — the ability to perform well across diverse futures. While many deep learning approaches prioritize optimized solutions that may fail when faced with unforeseen challenges (such as optimized just-in-time supply systems did in the face of COVID-19), DMDU methods prize robust alternatives that may trade optimality for the ability to achieve acceptable outcomes across many environments. DMDU methods offer a valuable conceptual framework for developing AI that can navigate real-world uncertainties. Developing a fully autonomous vehicle (AV) could demonstrate the application of the proposed methodology. The challenge lies in navigating diverse and unpredictable real-world conditions, thus emulating human decision-making skills while driving. 


Bouncing back from a cyber attack

In the case of a cyber attack, the inconceivable has already happened – all you can do now is bounce back. The big picture issue is that too often IoT (internet of things) networks are filled with bad code, poor data practices, lack of governance, and underinvestment in secure digital infrastructure. Due to the popularity and growth of IoT, manufacturers of IoT devices spring up overnight promoting products that are often constructed using lower-quality components and firmware, which can have sometimes well-known vulnerabilities exposed due to poor design and production practices. These vulnerabilities are then introduced to a customer environment increasing risk and possibly remaining unidentified. So, there’s a lot of work to do, including creating visibility over deep, widely connected networks with a plethora of devices talking to each other. All too often, IT and OT networks run on the same flat network. For these organisations, many are planning segmentation projects, but they are complex and disruptive to implement, so in the meantime companies want to understand what's going on in these environments and minimise disruption in the event of an attack.


Diversity, Equity, and Inclusion for Continuity and Resilience

As continuity professionals, the average age tends to skew older, so how do we continue to bring new people to the fold to ensure they feel like they can learn and be respected in the industry? Students need to be made aware this is an industry they can step into. Unfortunately, many already have experience seeing active shooter drills as the norm. They may have never organized one, but they have participated in many of these drills in school. Why not take advantage of that experience for the students who are interested in this field? Taking their advice could make exercising like active shooter or weather events less traumatic. Listening to their experience – doing it for at least 13 years – gives them a lot of insight from even Millennials who grew up at the forefront of school shootings, but not actively exercising what to do if it happens while in school. These future colleagues’ insights could change how we do specific exercises and events to benefit everyone. Still, there must be openness to new and fresh ideas and treating them with validity instead pushing them off due to their age and experience. Similarly, people with disabilities have always been vocal about their needs. 


AI’s pivotal role in shaping the future of finance in 2024 and beyond

As AI becomes more embedded in the financial fabric, regulators are crafting a nuanced framework to ensure ethical AI use. The Reserve Bank of India (RBI) and the Securities and Exchange Board of India (SEBI) have initiated guidelines for responsible AI adoption, emphasising transparency, accountability, and fairness in algorithmic decision-making processes. While the benefits are palpable, challenges persist. The rapid pace of AI integration demands a strategic approach to ensure a safe, financial eco-system ... The evolving nature of jobs due to AI necessitates a concerted effort towards upskilling the workforce. A McKinsey Global Institute report indicates that approximately 46% of India’s workforce may undergo significant changes in their job profiles due to automation and AI. To address this, collaborative initiatives between the government, educational institutions, and the private sector are imperative to equip the workforce with the requisite skills for the future. ... The Reserve Bank of India (RBI) and the Securities and Exchange Board of India (SEBI) have recognised the need for ethical AI use in the financial sector. Establishing clear guidelines and frameworks for responsible AI governance is crucial. 


How to proactively prevent password-spray attacks on legacy email accounts

Often with an ISP it’s hard to determine the exact location from which a user is logging in. If they access from a cellphone, often that geographic IP address is in a major city many miles away from your location. In that case, you may wish to set up additional infrastructure to relay their access through a tunnel that is better protected and able to be examined. Don’t assume the bad guys will use a malicious IP address to announce they have arrived at your door. According to Microsoft, “Midnight Blizzard leveraged their initial access to identify and compromise a legacy test OAuth application that had elevated access to the Microsoft corporate environment. The actor created additional malicious OAuth applications.” The attackers then created a new user account to grant consent in the Microsoft corporate environment to the actor-controlled malicious OAuth applications. “The threat actor then used the legacy test OAuth application to grant them the Office 365 Exchange Online full_access_as_app role, which allows access to mailboxes.” This is where my concern pivots from Microsoft’s inability to proactively protect its processes to the larger issue of our collective vulnerability in cloud implementations. 


How To Implement The Pipeline Design Pattern in C#

The pipeline design pattern in C# is a valuable tool for software engineers looking to optimize data processing. By breaking down a complex process into multiple stages, and then executing those stages in parallel, engineers can dramatically reduce the processing time required. This design pattern also simplifies complex operations and enables engineers to build scalable data processing pipelines. ...The pipeline design pattern is commonly used in software engineering for efficient data processing. This design pattern utilizes a series of stages to process data, with each stage passing its output to the next stage as input. The pipeline structure is made up of three components: The source: Where the data enters the pipeline; The stages: Each stage is responsible for processing the data in a particular way; The sink: Where the final output goes Implementing the pipeline design pattern offers several benefits, with one of the most significant benefits in efficiency of processing large amounts of data. By breaking down the data processing into smaller stages, the pipeline can handle larger datasets. The pattern also allows for easy scalability, making it easy to add additional stages as needed. 


Accuracy Improves When Large Language Models Collaborate

Not surprisingly, this idea of group-based collaboration also makes sense with large language models (LLMs), as recent research from MIT’s Computer Science and Artificial Intelligence Laboratory (CSAIL) is now showing. In particular, the study focused on getting a group of these powerful AI systems to work with each other using a kind of “discuss and debate” approach, in order to arrive at the best and most factually accurate answer. Powerful large language model AI systems, like OpenAI’s GPT-4 and Meta’s open source LLaMA 2, have been attracting a lot of attention lately with their ability to generate convincing human-like textual responses about history, politics and mathematical problems, as well as producing passable code, marketing copy and poetry. However, the tendency of these AI tools to “hallucinate”, or come up with plausible but false answers, is well-documented; thus making LLMs potentially unreliable as a source of verified information. To tackle this problem, the MIT team claims that the tendency of LLMs to generate inaccurate information will be significantly reduced with their collaborative approach, especially when combined with other methods like better prompt design, verification and scratchpads for breaking down a larger computational task into smaller, intermediate steps.


There's AI, and Then There's AGI: What You Need to Know to Tell the Difference

For starters, the ability to perform multiple tasks, as an AGI would, does not imply consciousness or self-will. And even if an AI had self-determination, the number of steps required to decide to wipe out humanity and then make progress toward that goal is too many to be realistically possible. "There's a lot of things that I would say are not hard evidence or proof, but are working against that narrative [of robots killing us all someday]," Riedl said. He also pointed to the issue of planning, which he defined as "thinking ahead into your own future to decide what to do to solve a problem that you've never solved before." LLMs are trained on historical data and are very good at using old information like itineraries to address new problems, like how to plan a vacation. But other problems require thinking about the future. "How does an AI system think ahead and plan how to eliminate its adversaries when there is no historical information about that ever happening?" Riedl asked. "You would require … planning and look ahead and hypotheticals that don't exist yet … there's this big black hole of capabilities that humans can do that AI is just really, really bad at."


Metaverse and the future of product interaction

As the metaverse continues to evolve, so must the approach to product design. This includes considering how familiar objects can be repurposed as functional interface elements in a virtual environment. Additionally, understanding the dynamics of group interactions in virtual spaces is crucial. Designers must anticipate these trends and adapt their designs accordingly, ensuring that products remain relevant and engaging in the ever-changing landscape of the metaverse. In India, the metaverse presents significant opportunities for businesses to redefine consumer experiences. It opens up possibilities for more interactive, personalised, and adventurous engagements with customers. This not only increases customer engagement and loyalty but also creates new avenues for value exchange and revenue streams. The metaverse, with its potential to impact diverse sectors like communications, retail, manufacturing, education, and banking, is poised to be a game-changer in the Indian market. ... As the metaverse continues to expand its reach and influence, businesses and designers in India and around the world must evolve to meet the demands of this new digital era.


Build trust to win out with genAI

Businesses need to adopt ‘responsible technology’ practices, which will give them a powerful lever that enables them to deploy innovative genAI solutions while building trust with consumers. Responsible tech is a philosophy that aligns an organization’s use of technology to both individuals’ and society’s interests. It includes developing tools, methodologies, and frameworks that observe these principles at every stage of the product development cycle. This ensures that ethical concerns are baked in at the outset. This approach is gaining momentum, as people realize how technologies such as genAI, can impact their daily lives. Even organizations such as the United Nations are codifying their approach to responsible tech. Consumers urgently want organizations to be responsible and transparent with their use of genAI. This can be a challenge because, when it comes to transparency, there are a multitude of factors to consider, including everything from acknowledging AI is being used to disclosing what data sources are used, what the steps were taken to reduce bias, how accurate the system is, or even the carbon footprint associated with the genAI system.



Quote for the day:

"Entrepreneurs average 3.8 failures before final success. What sets the successful ones apart is their amazing persistence." -- Lisa M. Amos

Daily Tech Digest - August 30, 2023

Generative AI Faces an Existential IP Reckoning of Its Own Making

Clearly, this situation is untenable, with a raft of dire consequences already beginning to emerge. Should the courts determine that generative AI firms aren’t protected by the fair use doctrine, the still-budding industry could be on the hook for practically limitless damages. Meanwhile, platforms like Reddit are beginning to aggressively push back against unchecked data scraping. ... These sorts of unintended externalities will only continue to multiply unless strong measures are taken to protect copyright holders. Government can play an important role here by introducing new legislation to bring IP laws into the 21st century, replacing outdated regulatory frameworks created decades before anyone could have predicted the rise of generative AI. Government can also spur the creation of a centralized licensing body to work with national and international rights organizations to ensure that artists, content creators, and publishers are being fairly compensated for the use of their content by generative AI companies.


6 hidden dangers of low code

The low-code sales pitch is that computers and automation make humans smarter by providing a computational lever that multiplies our intelligence. Perhaps. But you might also notice that, as people grow to trust in machines, we sometimes stop thinking for ourselves. If the algorithm says it’s the right thing to do, we'll just go along with it. There are endless examples of the disaster that can ensue from such thoughtlessness. ... When humans write code, we naturally do the least amount of work required, which is surprisingly efficient. We're not cutting corners; we're just not implementing unnecessary features. Low code solutions don’t have that advantage. They are designed to be one-size-fits-all, which in computer code means libraries filled with endless if-then-else statements testing for every contingency in the network. Low code is naturally less efficient because it’s always testing and retesting itself. This ability to adjust automatically is the magic that the sales team is selling, after all. But it’s also going to be that much less efficient than hand-tuned code written by someone who knows the business.


Applying Reliability Engineering to the Manufacturing IT Environment

To understand exposure to failure, the Reliability Engineers analyzed common failure modes across manufacturing operations, utilizing the Failure Mode and Effects Analysis (FMEA) methodology to anticipate potential issues and failures. Examples of common failure modes include “database purger/archiving failures leading to performance impact” and “inadequate margin to tolerate typical hardware outages.” The Reliability Engineers also identified systems that were most likely to cause factory impact due to risk from these shared failure modes. This data helped inform a Resiliency Maturity Model (RMM), which scores each common failure mode on a scale from 1 to 5 based on a system’s resilience to that failure mode. This structured approach enabled us to not just fix isolated examples of applications that were causing the most problems, but to instead broaden our impact and develop a reliability mindset. 


5 Skills All Marketing Analytics and Data Science Pros Need Today

Marketing analysts should hone their skills to know who to talk to – and how to talk to them – to secure the information they have. Trust Insights’ Katie Robbert says it requires listening and asking questions to understand what they know that you need to take back to your team, audience, and stakeholders. “You can teach anyone technical skills. People can follow the standard operating procedure,” she says. “The skill set that is so hard to teach is communication and listening.” ... By improving your communication skills, you’ll be well-positioned to follow Hou’s advice: “Weave a clear story in terms of how marketing data could and should guide the organization’s marketing team.” She says you should tell a narrative that connects the dots, explains the how and where of a return on investment, and details actions possible not yet realized due to limited lines of sight. ... Securing organization-wide support requires leaning into what the data can do for the business. “Businesspeople want to see the business outcomes. 


Neural Networks vs. Deep Learning

Neural networks, while powerful in synthesizing AI algorithms, typically require less resources. In contrast, as deep learning platforms take time to get trained on complex data sets to be able to analyze them and provide rapid results, they typically take far longer to develop, set up and get to the point where they yield accurate results. ... Neural networks are trained on data as a way of learning and improving their conclusions over time. As with all AI deployments, the more data it’s trained on the better. Neural networks must be fine-tuned for accuracy over and over as part of the learning process to transform them into powerful artificial intelligence tools. Fortunately for many businesses, plenty of neural networks have been trained for years – far before the current craze inspired by ChatGPT – and are now powerful business tools. ... Deep learning systems make use of complex machine learning techniques and can be considered a subset of machine learning. But in keeping with the multi-layered architecture of deep learning, these machine learning instances can be of various types and various strategies throughout a single deep learning application.


Ready or not, IoT is transforming your world

At its core, IoT refers to the interconnection of everyday objects, devices, and systems through the internet, enabling them to collect, exchange, and analyze data. This connectivity empowers us to monitor and control various aspects of our lives remotely, from smart homes and wearable devices to industrial machinery and city infrastructure. The essence of IoT lies in the seamless communication between objects, humans, and applications, making our environments smarter, more efficient, and ultimately, more convenient. ... Looking ahead, the future of IoT holds remarkable potential. Over the next five years, we can expect a multitude of advancements that will reshape industries and lifestyles. Smart cities will continue to evolve, leveraging IoT to enhance sustainability, security, and quality of life. The healthcare sector will witness even more personalized and remote patient monitoring, revolutionizing the way medical care is delivered. AI and automation will play a pivotal role, in driving efficiency and innovation across various domains.


What are network assurance tools and why are they important?

Without a network assurance tool at their disposal, many enterprises would be forced to limit their network reach and capacity. "They would be unable to take advantage of the latest technological advancements and innovations because they didn’t have the manpower or tools to manage them," says Christian Gilby, senior product director, AI-driven enterprise, at Juniper Networks. "At the same time, enterprises would be left behind by their competitors because they would still be utilizing manual, trial-and-error procedures to uncover and repair service issues." The popularity of network assurance technology is also being driven by a growing enterprise demand for network teams to do more with less. "Efficiency is needed in order to manage the ever-expanding network landscape," adds Gilby. New devices and equipment are constantly brought online and added to networks. Yet enterprises don’t have unlimited IT budgets, meaning that staffing levels often remain the same, even as workloads increase.


How tomorrow’s ‘smart cities’ will think for themselves

In the smart cities of the future, technology will be built to respond to human needs. Sustainability is the biggest problem facing cities – and by far the biggest contributor is the automobile. Smart cities will enable the move towards reducing traffic, and towards autonomous vehicles directed efficiently through the streets. Deliveries which are not successful the first time are one example. These are a key driver of congestion, as drivers have to return to the same address repeatedly. In a cognitive city, location data that shows when a customer is home can be shared anonymously with delivery companies – with their consent – so that more deliveries arrive on the first attempt. Smart parking will be another important way to reduce congestion and make the streets more efficient. Edge computing nodes will sense empty parking spaces and direct cars there in real-time. They will also be a key enabler for autonomous driving, delivering more data points to autonomous systems in cars. 


Navigating Your Path to a Career in Cyber Security: Practical Steps and Insights

Practical experience is critical in the field of cyber security. Seek opportunities to apply your knowledge and gain hands-on experience as often as you can. I recommend looking for internships, part-time jobs, or volunteer positions that allow you to work on real-world projects and develop practical skills. I cannot stress how important it is to understand the fundamentals. ... Networking is essential for finding job opportunities in any field, including cybersecurity. You should attend industry events and conferences (there are plenty of free ones) and try to meet as many professionals already working in the field as possible. Their insights will go a long way in your journey to finding the right role. There are also many online communities and forums you can join where cyber security experts gather to discuss trends, share knowledge, and explore job opportunities. Networking will help you gain insights, discover job openings, and even receive recommendations from industry professionals.


NCSC warns over possible AI prompt injection attacks

Complex as this may seem, some early developers of LLM-products have already seen attempted prompt injection attacks against their applications, albeit generally these have been either rather silly or basically harmless. Research is continuing into prompt injection attacks, said the NCSC, but there are now concerns that the problem may be something that is simply inherent to LLMs. This said, some researchers are working on potential mitigations, and there are some things that can be done to make prompt injection a tougher proposition. Probably one of the most important steps developers can take is to ensure they are architecting the system and its data flows so that they are happy with the worst-case scenario of what the LLM-powered app is allowed to do. “The emergence of LLMs is undoubtedly a very exciting time in technology. This new idea has landed – almost completely unexpectedly – and a lot of people and organisations (including the NCSC) want to explore and benefit from it,” wrote the NCSC team.



Quote for the day:

"When you practice leadership, the evidence of quality of your leadership, is known from the type of leaders that emerge out of your leadership" -- Sujit Lalwani

Daily Tech Digest - December 22, 2022

Data forecast for 2023: Time to extract more value

Using data effectively relies in large part on being able to properly manage and control how data is used. That's where data governance comes into play, with tools and technologies that help organizations govern the data they use. Data governance will have an expanded role in 2023, according to Eckerson Research analyst Kevin Petrie. There will be a growing use of ML technologies to improve data governance technology by helping to automate processes and policies for data. Petrie said he also expects a rising number of data governance platforms to help organize, document and apply policies to ML models alongside other data assets in 2023. Benefitting from data to improve business outcomes entails collecting product and service data. That's where the concept of data as a product -- also referred to as data product -- will have growing relevance in 2023. Barr Moses, CEO of data observability vendor Monte Carlo, predicted that nearly every product will become a data product as organizations seek to optimize operations. "In 2023, more and more companies will seek to integrate ways to track and monetize data generated by their products as part of their core offerings to drive competitive advantage," Moses said.


The Future of Skills: Preparing for Industry 4.0 and Beyond

Industry 4.0—Industrial Internet of Things or the 4th Industrial revolution, as it is popularly addressed—has arrived with lots of opportunities and challenges that have the potential to transform the marketplace completely. Industry 4.0 refers to the “smart” and connected production systems that are designed to sense, predict and interact with the physical world so as to make decisions that support production in real-time, increasing productivity, energy efficiency and sustainability. McKinsey estimates that IoT has the potential to unlock an economic value somewhere between US$5.5 to $12.6 trillion by 2030. Therefore, with so many changes happening so quickly, neither employers nor employees (both employed and yet to be employed) can afford to ignore them or to stay in their comfort zone following the same old practices or skills. A report by World Economic Forum states that 84 percent of employers are set to rapidly digitalize working processes with the potential to move 44 percent of their workforce to operate remotely, and the top skills needed as we lead up to 2025 are critical thinking and analysis, problem solving, active learning, resilience, stress tolerance and flexibility.


What is DataOps? Collaborative, cross-functional analytics

Enterprises today are increasingly injecting machine learning into a vast array of products and services and DataOps is an approach geared toward supporting the end-to-end needs of machine learning. “For example, this style makes it more feasible for data scientists to have the support of software engineering to provide what is needed when models are handed over to operations during deployment,” Ted Dunning and Ellen Friedman write in their book, Machine Learning Logistics. “The DataOps approach is not limited to machine learning,” they add. “This style of organization is useful for any data-oriented work, making it easier to take advantage of the benefits offered by building a global data fabric.” ... Because DataOps builds on DevOps, cross-functional teams that cut across “skill guilds” such as operations, software engineering, architecture and planning, product management, data analysis, data development, and data engineering are essential, and DataOps teams should be managed in ways that ensure increased collaboration and communication among developers, operations professionals, and data experts.


Amplified security trends to watch out for in 2023

Cybercriminals target employees across different industries to surreptitiously recruit them as insiders, offering them financial enticements to hand over company credentials and access to systems where sensitive information is stored. This approach isn’t new, but it is gaining popularity. A decentralized work environment makes it easier for criminals to target employees through private social channels, as the employee does not feel that they are being watched as closely as they would in a busy office setting. Aside from monitoring user behavior and threat patterns, it’s important to be aware of and be sensitive about the conditions that could make employees vulnerable to this kind of outreach – for example, the announcement of a massive corporate restructuring or a round of layoffs. Not every employee affected by a restructuring suddenly becomes a bad guy, but security leaders should work with Human Resources or People Operations and people managers to make them aware of this type of criminal scheme, so that they can take the necessary steps to offer support to employees who could be affected by such organizational or personal matters.


How deep learning will ignite the metaverse in 2023 and beyond

Currently, the digital realities being developed by different companies have their own attributes and integrated functionalities, and are at different development levels. Many of these multiverse platforms are expected to converge, and this junction is where AI and data science domains, such as deep learning, will be critical in taking users to a new stage in their metaverse journey. Success in these endeavors will be contingent upon understanding vital elements of the algorithmic models and their metrics. Deep learning-based software is already being integrated into virtual worlds; some examples include autonomously driving chatbots and other forms of natural language processing to ensure seamless interactions. For another example, in AR technology, deep learning-enabled AI is used in camera pose estimation, immersive rendering, real-world object detection and 3D object reconstruction, helping to guarantee the variety and usability of AR applications. ... “Companies have an interesting opportunity for their customers and community to interact with their brand(s) in new and exciting ways, and deep learning-based artificial intelligence plays a major role in facilitating those experiences,” said Stephenson.


Introducing Cadl: Microsoft’s concise API design language

Microsoft has begun to move much of its API development to a language called Cadl, which helps you define API structures programmatically before compiling to OpenAPI definitions. The intent is to do for APIs what Bicep does for infrastructure, providing a way to repeatably deliver API definitions. By abstracting design away from definition, Cadl can deliver much more concise outputs, ensuring that the OpenAPI tool in platforms like Visual Studio can parse it quickly and efficiently. What is Cadl? At first glance it’s a JavaScript-like language with some similarities to .NET languages. Microsoft describes it as “TypeScript for APIs,” intending it to be easy to use for anyone familiar with C#. Like Microsoft’s other domain-specific languages, Cadl benefits from Microsoft’s long history as a development tools company, fitting neatly into existing toolchains. You can even add Cadl extensions to the language server in Visual Studio and Visual Studio Code, ensuring that you get support from built-in syntax highlighting, code completion, and linting. Making Cadl a language makes a lot of sense; it allows you to encapsulate architectural constraints into rules and wrap common constructs in libraries. 


CIOs in 2023: Guiding Business Strategies Through Data-Driven Decisions

“CIOs need to take on a data mindset by first understanding the data, and then determining how critical the data architecture and data governance is,” he says. For understanding the business process, they need to think about how they can move the needle for the company, prioritize the projects that drive business, and implement or evolve the systems they already have. “The third important thing is building business partnerships across the organization,” Kancharla adds. “Having all levels of relationships will go a long way for the CIOs to be successful. The last thing is really thinking of what optimizations they can bring to the company, especially next year.” He points out that next year, every company will have to bring down costs, which means streamlining and optimizing the software within the company and deploying the tools they already have to the full potential. Segovia adds effective CIOs must also be able to understand the tech and recommendations their teams are executing on. “They need to understand areas in a reasonably deep manner in order to lead teams of wide technical and digital acumen,” he says.


Social media use can put companies at risk: Here are some ways to mitigate the danger

The concern is that foreign-owned applications might share the information they collect with government intelligence agencies. That information includes personally identifiable information, keystroke patterns (PII), location information based on SIM card or IP address, app activity, browser and search history, and biometric information. Personal use of social media by employees can impact the company’s brand as well as endanger the firm or employees themselves—bad actors could use social media to identify where a person works, the division in which they work, and possibly their physical location. The potential harm is higher for high-risk employees such as senior executives or those with authority to execute financial transactions. Of course, there are plenty of good reasons for employees to use social media. It can enhance marketing campaigns, announce news or critical information, and otherwise raise the profile of an organization. Social media channels can be used to monitor risks and threats against a government or critical infrastructure. 


The power of generosity in ecosystems

A traditional approach to competition, rooted in the business mindset of one company gaining an advantage over another, can make it difficult to play in an ecosystem as a participant. For example, one of the risks of being part of an ecosystem is the dependency on its orchestrator. Increased reliance on Big Tech and the consolidation of many industries have created an increased risk of a few powerful cash-generator businesses that need to reward shareholders with consistent, attractive margins and will not think twice about burdening their partners to keep those margins—for example, by asking for discounts in exchange for participating in the ecosystem. But what if there was more of a sense of mutual collaboration? Benjamin Gomes-Casseres of Brandeis University has published research with Harvard Business Review Press on different business combinations (his term for business ecosystems). He states that for an ecosystem to logically exist, the players within an ecosystem must fairly share the benefits, creating added value for the entire ecosystem that exceeds the level of value each company could create independently.


6 BI challenges IT teams must address

There can be obstacles, however, to taking the self-service approach. Having too much access across many departments, for example, can result in a kitchen full of inexperienced cooks running up costs and exposing the company to data security problems. And do you want your sales team making decisions based on whatever data it gets, and having the autonomy to mix and match to see what works best? Central, standardized control over tool rollout is key. And to do it correctly, IT needs to govern the data well. Because of these tradeoffs, organizations must ensure they select the BI approach best-suited for the business application at hand. “We have more than 100,000 associates in addition to externals working for us, and that’s quite a large user group to serve,” says Axel Goris, global visual analytics lead at Novartis, the multinational pharmaceutical corporation based in Basel, Switzerland. “A key challenge was organization around delivery — how do you organize delivery, because a pharmaceutical company is highly regulated.” An IT-managed BI delivery model, Goris explains, requires a lot of effort and process, which wouldn’t work for some parts of the business.



Quote for the day:

"Nothing so conclusively proves a man's ability to lead others as what he does from day to day to lead himself." -- Thomas J. Watson

Daily Tech Digest - August 31, 2022

Beyond “Agree to Disagree”: Why Leaders Need to Foster a Culture of Productive Disagreement and Debate

The business imperative of nurturing a culture of productive disagreement is clear. The good news is that senior leaders can play a highly influential role in this regard. By integrating the concepts of openness and healthy debate into their own and their organization’s language they can institutionalize new norms. Their actions can help to further reset the rules of engagement by serving as a model for employees to follow. ... Leaders should incorporate the concept of productive debate into corporate value statements and the way they address colleagues, employees, and shareholders. Michelin, for example, built debate into its value statement. One of its organizational values is “respect for facts,” which it describes as follows: “We utilize facts to learn, honestly challenge our beliefs….” Another company that espouses debate as value is Bridgewater. Founder Ray Dalio ingrained principles and subprinciples such as “be radically open-minded” and “appreciate the art of thoughtful disagreement” in the investment management company’s culture.


Using technology to power the future of banking

Because I believe that anyone that wants to be a CIO or a CTO, particularly in the way that the industry is progressing, you need to understand technology. So, staying close to the technology and curious and wanting to solve those problems has helped me. But there's another part to it, too. In every one of my roles, there have been times when I've seen something that wasn't necessarily working and I had ideas and wanted to help, but it might’ve been outside of my responsibility. I've always leaned in to help, even though I knew that it was going to help someone else in the organization, because it was the right thing to do and it helped the company, it helped other people. So, it ended up building stronger relationships, but also building my skillset. I think that's been a part of my rise too, and it's something that's just incredibly powerful from a cultural perspective. That’s something that I love here. Everybody is in it together to work that way. But I also think that it just speaks volumes about an individual, and people gravitate to want to work with people that operate that way. 


Physics breakthrough could lead to new, more efficient quantum computers

According to the researchers, this technique for generating stable qubits could have massive implications for the entire field of quantum computing, but especially for scalability and noise-reduction: At this stage, our system faces mostly technical limitations, such as optical losses, finite cooperativity and imperfect Raman pulses. Even modest improvements in these respects would put us within reach of loss and fault tolerance thresholds for quantum error correction. It’ll take some time to see how well this experimental generation of qubits translates into an actual computation device, but there’s plenty of reason to be optimistic. There are numerous different methods by which qubits can be made, and each lends to its own unique machine architecture. The upside here is that the scientists were able to generate their results with a single atom. This indicates that the technique would be useful outside of computing. If, for example, it could be developed into a two-atom system, it could lead to a novel method for secure quantum communication.


Organizations security: Highlighting the importance of compliant data

When choosing a web data collection platform or network, it’s important that security professionals use a compliance-driven service provider to safeguard the integrity of their network and operations. Compliant data collection networks ensure that security operators have a safe and suitable environment in which to perform their work without being compromised by potential bad actors using the same network or proxy infrastructure. These data providers institute extensive and multifaceted compliance processes that include a number of internal as well as external procedures and safeguards, such as manual reviews and third-party audits, to identify non-compliant active patterns and ensure that all use of the network follows the overall compliance guidelines. This of course also includes abiding by the data gathering guidelines established by international regulators, such as the European Union and the US State of California, as well as enforcing others who follow public web scraping best practices for compliant and reliable web data scraping or collection.


TensorFlow, PyTorch, and JAX: Choosing a deep learning framework

It’s not like TensorFlow has stood still for all that time. TensorFlow 1.x was all about building static graphs in a very un-Python manner, but with the TensorFlow 2.x line, you can also build models using the “eager” mode for immediate evaluation of operations, making things feel a lot more like PyTorch. At the high level, TensorFlow gives you Keras for easier development, and at the low-level, it gives you the XLA optimizing compiler for speed. XLA works wonders for increasing performance on GPUs, and it’s the primary method of tapping the power of Google’s TPUs (Tensor Processing Units), which deliver unparalleled performance for training models at massive scales. Then there are all the things that TensorFlow has been doing well for years. Do you need to serve models in a well-defined and repeatable manner on a mature platform? TensorFlow Serving is there for you. Do you need to retarget your model deployments for the web, or for low-power compute such as smartphones, or for resource-constrained devices like IoT things? TensorFlow.js and TensorFlow Lite are both very mature at this point. 


IoT Will Power Itself – Power Electronics News

Energy harvesting is nothing new, with solar power being one of the most famous examples. Solar energy works well for powering parking meters, but if we’re going to bring online the packaging and containers that are at the heart of our supply chains—things that are indoors and stacked on top of each other—we need another solution. The technology that gives mundane things like transporting cash registers both their intelligence and energy-harvesting power are small, inexpensive, brand-size computers printed as stickers and affixed to cash registers, sweater tags, vaccine vials, or other items racing in the global supply chain. These sticker tags, called IoT Pixels, include an ARM processor, a Bluetooth radio, sensors, and a security module — basically a complete system-on-a-chip (SoC). All that remains is to power this tiny SoC in the most efficient and economical way possible. It turns out that as wireless networks permeate our lives and radio frequency (RF) activity is everywhere, the prospect of recycling that RF activity into energy is the most practical and ubiquitous solution.


CoAuthor: Stanford experiments with human-AI collaborative writing

CoAuthor is based on GPT-3, one of the recent large language models from OpenAI, trained on a massive collection of already-written text on the internet. It would be a tall order to think a model based on existing text might be capable of creating something original, but Lee and her collaborators wanted to see how it can nudge writers to deviate from their routines—to go beyond their comfort zone (e.g., vocabularies that they use daily)—to write something that they would not have written otherwise. They also wanted to understand the impact such collaborations have on a writer’s personal sense of accomplishment and ownership. “We want to see if AI can help humans achieve the intangible qualities of great writing,” Lee says. Machines are good at doing search and retrieval and spotting connections. Humans are good at spotting creativity. If you think this article is written well, it is because of the human author, not in spite of it. ... The goal, Lee says, was not to build a system that can make humans write better and faster. Instead, it was to investigate the potential of recent large language models to aid in the writing process and see where they succeed and fail. 


LastPass source code breach – do we still recommend password managers?

The breach itself actually happened two weeks before that, the company said, and involved attackers getting into the system where LastPass keeps the source code of its software. From there, LastPass reported, the attackers “took portions of source code and some proprietary LastPass technical information.” We didn’t write this incident up last week, because there didn’t seem to be a lot that we could add to the LastPass incident report – the crooks rifled through their proprietary source code and intellectual property, but apparently didn’t get at any customer or employee data. In other words, we saw this as a deeply embarrassing PR issue for LastPass itself, given that the whole purpose of the company’s own product is to help customers keep their online accounts to themselves, but not as an incident that directly put customers’ online accounts at risk. However, over the past weekend we’ve had several worried enquiries from readers (and we’ve seen some misleading advice on social media), so we thought we’d look at the main questions that we’ve received so far.


FBI issues alert over cybercriminal exploits targeting DeFi

The FBI observed cybercriminals exploiting vulnerabilities in smart contracts that govern DeFi platforms in order to steal investors’ cryptocurrency. In a specific example, the FBI mentioned cases where hackers used a “signature verification vulnerability” to plunder $321 million from the Wormhole token bridge back in February. It also mentioned a flash loan attack that was used to trigger an exploit in the Solana DeFi protocol Nirvana in July. However, that’s just a drop in a vast ocean. According to an analysis from blockchain security firm CertiK, since the start of the year, over $1.6 billion has been exploited from the DeFi space, surpassing the total amount stolen in 2020 and 2021 combined. While the FBI admitted that “all investment involves some risk,” the agency has recommended that investors research DeFi platforms extensively before use and, when in doubt, seek advice from a licensed financial adviser. The agency said it was also very important that the platform's protocols are sound and to ensure they have had one or more code audits performed by independent auditors.


Privacy and security issues associated with facial recognition software

Facial recognition technology in surveillance has improved dramatically in recent years, meaning it is quite easy to track a person as they move about a city, he said. One of the privacy concerns about the power of such technology is who has access to that information and for what purpose. Ajay Mohan, principal, AI & analytics at Capgemini Americas, agreed with that assessment. “The big issue is that companies already collect a tremendous amount of personal and financial information about us [for profit-driven applications] that basically just follows you around, even if you don’t actively approve or authorize it,” Mohan said. “I can go from here to the grocery store, and then all of a sudden, they have a scan of my face, and they’re able to track it to see where I’m going.” In addition, artificial intelligence (AI) continues to push the capabilities of facial recognition systems in terms of their performance, while from an attacker perspective, there is emerging research leveraging AI to create facial “master keys,” that is, AI generation of a face that matches many different faces, through the use of what’s called Generative Adversarial Network techniques, according to Lewis.



Quote for the day:

"If you don't demonstrate leadership character, your skills and your results will be discounted, if not dismissed." -- Mark Miller