
Daily Tech Digest - March 01, 2025


Quote for the day:

"Your life does not get better by chance, it gets better by change." -- Jim Rohn


Two AI developer strategies: Hire engineers or let AI do the work

Philip Walsh, director analyst in Gartner’s software engineering practice, said that from his vantage point he sees “two contrasting signals: some leaders, like Marc Benioff at Salesforce, suggest they may not need as many engineers due to AI’s impact, while others — Alibaba being a prime example — are actively scaling their technical teams and specifically hiring for AI-oriented roles.” In practice, he said, Gartner believes AI is far more likely to expand the need for software engineering talent. “AI adoption in software development is early and uneven,” he said, “and most large enterprises are still early in deploying AI for software development — especially beyond pilots or small-scale trials.” Walsh noted that, while there is a lot of interest in AI-based coding assistants (Gartner sees roughly 80% of large enterprises piloting or deploying them), actual active usage among developers is often much lower. “Many organizations report usage rates of 30% or less among those who have access to these tools,” he said, adding that the most common tools are not yet generating sufficient productivity gains to yield cost savings or headcount reductions. He said, “current solutions often require strong human supervision to avoid errors or endless loops. Even as these technologies mature over the next two to three years, human expertise will remain critical.”


The Great AI shift: The rise of ‘services as software’

Today, AI is pushing the envelope by turning services that were built to be used by humans as ‘self-serve’ utilities into software solutions that execute autonomously—a paradigm shift the venture capital world, in particular, has termed ‘Services as Software’ ... The shift is already conspicuous across industries. AI tools like Harvey AI are transforming the legal and compliance sector by analysing case law and generating legal briefs, essentially replacing human research assistants. The customer support ecosystem that once required large human teams in call centres now handles significant query volumes daily with AI chatbots and virtual agents. ... The AI-driven shift brings into question the traditional notion of availing an ‘expert service’. Software development, legal, and financial services are all coveted industries where workers are considered ‘experts’ delivering specialised services. The human role will undergo tremendous redefinition and will require calibrated re-skilling. ... Businesses won't simply replace SaaS with AI-powered tools; they will build the company's processes and systems around these new systems. Instead of hiring marketing agencies, companies will use AI to generate dynamic marketing and advertising campaigns. Businesses will rely on AI-driven quality assurance and control instead of outsourcing software testing, Quality Assurance, and Quality Control.


Resilience, Observability and Unintended Consequences of Automation

Instead of thinking of replacing the work that humans do, it's about augmenting that work: how do we make it easier to do these kinds of jobs? That might be writing code, that might be deploying it, that might be tackling incidents when they come up. The fancy, nerdy academic jargon for this is joint cognitive systems. Instead of replacement, or functional allocation (another good nerdy academic term: we'll give automation this piece, we'll give the humans those pieces), how do we have a joint system where the automation is really supporting the work of the humans in this complex system? In particular, how do you allow them to troubleshoot it, to introspect it, to actually understand it? The nerdier versions of this research even lay out possible ways of thinking about what these computers can do to help us. ... We could go monolith to microservices, we could go pick your digital transformation. How long did that take you? And how much care did you put into that? Maybe some of it was too long or too bureaucratic, but I would argue that we tend to YOLO internal developer technology way faster and way looser than we do with the things that are perceived to actually make us money.


The Modern CDN Means Complex Decisions for Developers

“Developers should not have to be experts on how to scale an application; that should just be automatic. But equally, they should not have to be experts on where to serve an application to stay compliant with all these different patchworks of requirements; that should be more or less automatic,” Engates argues. “You should be able to flip a few switches and say ‘I need to be XYZ compliant in these countries,’ and the policy should then flow across that network and orchestrate where traffic is encrypted and where it’s served and where it’s delivered and what constraints are around it.” ... Along with the physical constraint of the speed of light and the rise of data protection and compliance regimes, Alexander also highlights the challenge of costs as something developers want modern CDNs to help them with. “Egress fees between clouds are one of the artificial barriers put in place,” he claims. That can be 10%, 20% or even 30% of overall cloud spend. “People can’t build the application that they want, they can’t optimize, because of some of these taxes that are added on moving data around.” Update patterns aren’t always straightforward either. Take a wiki like Fandom, where Artur Bergman, Fastly’s founder and CTO, previously served as CTO.
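The “flip a few switches” model Engates describes could be sketched as declarative policy resolution: declare which compliance regimes apply, and derive the delivery constraints from them. The regime names and rules below are invented for illustration and do not reflect any real CDN's API.

```python
# Hypothetical sketch: map declared compliance regimes to delivery
# constraints, so policy "flows across the network" from a few switches.
POLICY_RULES = {
    "gdpr": {"serve_in_region": "EU", "encrypt_in_transit": True},
    "data_residency": {"serve_in_region": "local", "encrypt_in_transit": True},
}

def resolve_policy(country, regimes):
    """Merge the declared regimes into one set of delivery constraints."""
    constraints = {"country": country, "encrypt_in_transit": False, "regions": set()}
    for regime in regimes:
        rule = POLICY_RULES[regime]
        constraints["encrypt_in_transit"] |= rule["encrypt_in_transit"]
        constraints["regions"].add(rule["serve_in_region"])
    return constraints

policy = resolve_policy("DE", ["gdpr", "data_residency"])
print(policy["encrypt_in_transit"], sorted(policy["regions"]))  # → True ['EU', 'local']
```

A real implementation would push these resolved constraints out to edge nodes rather than compute them per request, but the declarative shape is the point.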


A Comprehensive Look at OSINT

Cybersecurity professionals within corporations rely on public data to identify emerging phishing campaigns, data breaches, or malicious activity targeting their brand. Investigative journalists and academic researchers turn to OSINT for fact-checking, identifying new leads, and gathering reliable support for their reporting or studies. ... Avoiding OSINT or downplaying its value can leave organizations unaware of threats and opportunities that are readily discoverable to others. By failing to gather open-source data, businesses and government agencies could remain in the dark about malicious activities, negative brand impersonations, or stolen credentials circulating on forums and dark web marketplaces. In the event of a security breach or public scandal, stakeholders may view the lack of proper OSINT measures as a failure of due diligence, eroding trust and tarnishing the organization’s image. ... The primary driver behind OSINT’s growth is the vast reservoir of information generated daily by digital platforms, databases, and news outlets. This public data can be invaluable for enhancing security, improving transparency, and making more informed decisions. Security professionals, for instance, can preemptively identify threats and vulnerabilities posted openly by malicious actors. 


OT/ICS cyber threats escalate as geopolitical conflicts intensify

A persistent lack of visibility into OT environments continues to obscure the full scale of these attacks. These insights come from Dragos’ 2025 OT/ICS Cybersecurity Report, its eighth annual Year in Review, which analyzes industrial organizations’ cyber threats. ... VOLTZITE is arguably the most crucial threat group to track in critical infrastructure. Due to its dedicated focus on OT data, the group is a capable threat to ICS asset owners and operators. This group shares extensive technical overlaps with the Volt Typhoon threat group tracked by other organizations. It utilizes the same techniques as in previous years, setting up complex chains of network infrastructure to target, compromise, and steal OT-relevant data—GIS data, OT network diagrams, OT operating instructions, etc.—from victim ICS organizations. ... Increasing collaboration between hacktivist groups and state-backed cyber actors has led to a hybrid threat model where hacktivists amplify state objectives, either directly or through shared infrastructure and intelligence. State actors increasingly look to exploit hacktivist groups as proxies to conduct deniable cyber operations, allowing for more aggressive attacks with reduced attribution risks.


Leveraging AR & VR for Remote Maintenance in Industrial IoT

AR tools like Microsoft’s HoloLens 2 are enabling workers on-site to receive real-time guidance from experts located anywhere in the world. Using AR glasses or headsets, on-site personnel can share their view with remote technicians, who can then overlay instructions, schematics, or step-by-step troubleshooting guidance directly onto the worker’s field of vision. This allows maintenance teams to resolve issues faster and more accurately, without the need for travel, reducing downtime and operational costs. ... By using VR simulations, workers can familiarize themselves with equipment, troubleshoot issues, and practice responses to emergencies, all in a virtual setting. This hands-on experience builds confidence and competence, ultimately improving safety and efficiency when dealing with real equipment. As IIoT systems become more sophisticated, VR training can play a key role in ensuring that the workforce is well-prepared to handle advanced technologies without risking costly mistakes or accidents. ... In the future, we can expect even more seamless integration between AR/VR systems and IIoT platforms, where real-time data from sensors and machines is directly fed into the AR/VR environment, providing a comprehensive view of machine health, performance and issues. 


Just as DNA defines an organism’s identity, business continuity must be deeply embedded in every aspect of your organization. It is more than just a collection of emergency plans or procedures; it embodies a philosophy that ensures not only survival during disruptions, but long-term sustainability as well. ... An organization without continuity is like a tree without roots—fragile and vulnerable to the slightest shock. Continuity serves as an anchor, allowing organizations to navigate crises while staying aligned with their strategic goals. Any organization that aims to grow and thrive must take a proactive approach to continuity. Continuity strategies and initiatives can be seen as the roots of a tree, natural extensions that provide stability and sustain growth. ... It is essential that both leaders and team members possess the experience and skills needed to execute their work effectively. ... Thoroughly assess your key vulnerabilities. This involves two primary methods: a BIA, which analyzes the impacts of a disturbance over time to determine recovery priorities, resource requirements, and appropriate responses; and risk analysis, which identifies risks tied to prioritized activities and critical resources. Together, these two approaches offer a comprehensive understanding of your organization’s pain points.


Keep Your Network Safe From the Double Trouble of a ‘Compound Physical-Cyber Threat'

This phenomenon, a “compound physical-cyber threat,” where a cyberattack is intentionally launched around a heatwave or hurricane, for example, would have outsized and potentially devastating effects on businesses, communities, and entire economies, according to a 2024 study led by researchers at Johns Hopkins University. “Cyber-attacks are more disruptive when infrastructure components face stresses beyond normal operating conditions,” the study asserted. Businesses and their IT and risk management people would be wise to take notice, because both cyberattacks and weather-related disasters are increasing in frequency and in the cost they exact from their victims. ... Take what you learn from the risk assessment to develop a detailed plan that outlines the steps your organization intends to take to preserve cybersecurity, business continuity, and network connectivity during a crisis. Whether you’re a B2B or B2C organization, your customers, employees, suppliers and other stakeholders expect your business to be “always on,” 24/7/365. How will you keep the lights on, the lines of communications open, and your network insulated from cyberattack during a disaster? 


‘It Won’t Happen to Us:’ The Dangerous Mindset Minimizing Crisis Preparation

The main mistakes in crisis situations include companies staying silent and not releasing official statements from management, creating a vacuum of information and promoting the spread of rumors. ... First and foremost, companies should not underestimate the importance of communication, especially when things are not going well. During a crisis, many companies prefer to sit quietly and wait without informing or sharing anything about their measures and actions in connection with the crisis. This is the wrong approach. Silence gives competitors enough space to thrive and gain a market advantage. Meanwhile, journalists won’t stop working on hot stories. When you don’t share anything meaningful with them or your audience, they may collect and publish rumors and misinformation about your company. And the lack of comments creates the ground for negative interpretations. Therefore, transparency and efficiency are key principles of anti-crisis communication. If you are clear in your messages and give quick responses, it allows the company to control the information agenda. The surefire way to gain and maintain trust is to promptly and regularly inform your company’s investors during a crisis through your own channels. 

Daily Tech Digest - June 07, 2024

Technology, Regulations Can't Save Orgs From Deepfake Harm

Deepfakes have already become a tool for attackers behind business-leader impersonation fraud — in the past referred to as business email compromise (BEC) — where AI-generated audio and video of a corporate executive are used to fool lower-level employees into transferring money or taking other sensitive actions. In an incident disclosed in February, for example, a Hong Kong-based employee of a multinational corporation transferred about $25.5 million after attackers used deepfakes during a conference call to instruct the worker to make the transfers. ... Creating trusted channels of communication should be a priority for all companies, and not just for sensitive processes — such as initiating a payment or transfer — but also for communications to the public, says Deep Instinct's Froggett. "The best companies are already preparing, trying to think of the eventualities. ... You need legal, regulatory, and compliance groups — obviously, marketing and communication — to be able to mobilize to combat any misinformation," he says. 


Juniper Networks brings industry’s first and only AIOps to WAN routing, delivering AI-native insight for exceptional experiences

Juniper is introducing a new security insights Mist dashboard within its Premium Analytics product to provide comprehensive security event visibility and persona-based policy activation and threat responses. This increased visibility provides actionable intelligence to security teams, enabling them to quickly identify incidents and respond to threats in real-time—thereby improving the user experience. The security insights dashboard in Premium Analytics also helps break down siloed network and security management. ... Another innovation announced by Juniper, Routing Assurance, brings the company’s high performance, sustainable and versatile enterprise edge routing platforms under the Mist AI and cloud umbrella. ... In addition, Marvis, the industry’s first and only AI-Native VNA with a conversational interface built on more than seven years of learning, has been expanded to cover enterprise WAN edge routing. With Marvis’ conversational interface, IT teams can use simple language queries to identify and fix routing issues, including knowledge base queries powered by Generative AI.


How Sprinting Slows You Down: A Better Way to Build Software

First, start by killing the deadlines. In our model, engineers determine when a feature is ready to ship. They are thus able to make principled engineering decisions about what to implement now versus later, delivering better code than they would when making decisions driven by a two-week deadline. Second, assign smaller teams to features and give them greater scope. Because the teams are smaller (often just one engineer!), many new features are developed in parallel. These solo programmers or small teams own the entirety of implementation from back to front. There are no daily standups and needless communication is eliminated. And because the engineers control the implementation across the stack, they can make principled engineering decisions about how to build their functionality, rather than decisions constrained by the sliver of the codebase they happen to own, delivering a more cohesive implementation. The common thread between these two ideas is that they institutionally support making principled decisions, because good decisions today lead to better outcomes tomorrow. 


Why is site selection so important for the data center industry?

Climate considerations are paramount, with weather conditions impacting hazard exposure and vulnerability. Mitigating natural hazards such as floods, earthquakes, and hurricanes through engineered solutions is essential. Access to major highways and airports ensures logistical efficiency, particularly during construction and operation. The air quality surrounding a site affects equipment performance and employee health, necessitating measures to mitigate pollution. Historical data on natural disasters informs risk management strategies and facility design. Ground conditions must undergo thorough geotechnical investigation to assess structural stability and suitability for construction. The availability of robust communications infrastructure, particularly fiber-optic networks, is critical for seamless connectivity. Low latency, enabled by proximity to subsea cable landing sites and dense fiber networks, is imperative for high-performance applications. Geopolitical stability, regulatory environments, and taxation policies influence site selection decisions. Electrical power availability and cost significantly impact operational expenses, with renewable resources offering sustainability benefits.


Maximizing SaaS application analytics value with AI

AI analytics tools offer businesses the opportunity to optimize conversion rates, whether through form submissions, purchases, sign-ups or subscriptions. AI-based analytics programs can automate funnel analyses (which identify where in the conversion funnel users drop off), A/B tests (where developers test multiple design elements, features or conversion paths to see which performs better) and call-to-action button optimization to increase conversions. Data insights from AI and ML also help improve product marketing and increase overall app profitability, both vital components to maintaining SaaS applications. Companies can use AI to automate tedious marketing tasks (such as lead generation and ad targeting), maximizing both advertising ROI and conversion rates. And with ML features, developers can track user activity to more accurately segment and sell products to the user base. ... Managing IT infrastructure can be an expensive undertaking, especially for an enterprise running a large network of cloud-native applications. AI and ML features help minimize cloud expenditures by automating SaaS process responsibilities and streamlining workflows.
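The core of the funnel analysis mentioned above can be sketched in a few lines: given per-stage user counts, find the step where the largest share of users drops off. The stage names and numbers here are hypothetical.

```python
# Minimal funnel-analysis sketch: report the step losing the most users.
def funnel_dropoff(stages):
    """Return (stage, drop_rate) for the transition with the worst drop-off."""
    worst_stage, worst_rate = None, 0.0
    for (_, users_a), (name_b, users_b) in zip(stages, stages[1:]):
        rate = 1.0 - (users_b / users_a) if users_a else 0.0
        if rate > worst_rate:
            worst_stage, worst_rate = name_b, rate
    return worst_stage, worst_rate

# Hypothetical funnel: 10,000 visits narrowing to 400 purchases.
funnel = [("visit", 10000), ("signup", 2500), ("trial", 2000), ("purchase", 400)]
stage, rate = funnel_dropoff(funnel)
print(stage, round(rate, 2))  # → purchase 0.8
```

An AI-driven tool would layer segmentation and anomaly detection on top of this, but the drop-off computation itself is this simple.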


Inside the 'Secure By Design' Revolution

While not legally binding, the pledge encourages those that sign up to show demonstrable progress in each of the seven goals within a year. “One thing that we like, and I think a lot of industry likes, is it allows for flexibility in showing how you meet those goals,” Charley Snyder, head of security policy at Google, tells InformationWeek. If pledge signers are unable to show progress within a year, CISA encourages them to communicate what steps they did take and share what challenges they faced. The agency plans to offer its support throughout the year. “We are going to be working very closely with the pledge signers to help make progress on these pledge goals,” Zabierek explains. “We worked collaboratively with industry to develop the actions, and we're going to maintain that collaboration.” ... Tidelift, a company that partners with open-source maintainers, is not only applying the principles outlined in the pledge to its own software, but it also published an update on the ways it is working to help open-source maintainers achieve the pledge goals.


The next frontier: AI, VR, and the future of educational assessment

One of the most promising applications of AI in assessment is its ability to analyze vast amounts of data to identify patterns and trends in student performance, enabling educators to gain valuable insights into student progress and learning outcomes. By harnessing AI-powered analytics, educators can track student achievement over time, identify areas for improvement, and tailor instruction to address individual learning needs more effectively. ... In addition to AI, Virtual Reality (VR) is revolutionising the assessment landscape by offering immersive and interactive experiences that allow students to engage with content in three-dimensional, multisensory environments, providing opportunities for experiential learning and authentic assessment. Furthermore, VR technology enables educators to assess higher-order thinking skills such as problem-solving, critical thinking, and creativity in ways that are not feasible with traditional assessment methods. Through VR-based scenarios and simulations, students can engage in complex, real-world challenges, make decisions, and experience the consequences of their actions.


Cyber insurance isn’t the answer for ransom payments

Contrary to the belief that having cyber insurance increases the likelihood of ransom payments, Veeam’s research indicates otherwise. Despite only a minority of organizations having a policy of paying, 81% opted to do so. Interestingly, 65% paid with insurance and another 21% had insurance but chose to pay without making a claim. This implies that in 2023, 86% of organizations had insurance coverage that could have been utilized for a cyber event. The ransom paid averaged only 32% of the overall financial impact to an organization post-attack. Moreover, cyber insurance will not cover the entirety of the costs associated with an attack: only 62% of the overall impact is in some way reclaimable through insurance or other means, with the rest going against the organization’s bottom line. ... Alarmingly, 63% of organizations are at risk of reintroducing infections while recovering from ransomware attacks or significant IT disasters. Pressured to restore IT operations quickly and influenced by executives, many organizations skip vital steps, such as rescanning data in quarantine, increasing the likelihood that IT teams inadvertently restore infected data or malware.
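The percentages above can be reconciled with straightforward arithmetic: the ransom averages 32% of the total impact, and only 62% of that impact is reclaimable, leaving the remaining 38% against the bottom line. The total-impact figure used below is hypothetical.

```python
# Back-of-the-envelope breakdown of the quoted Veeam figures.
def ransomware_cost_breakdown(total_impact, ransom_share=0.32, reclaimable_share=0.62):
    ransom = total_impact * ransom_share          # avg ransom portion of impact
    reclaimable = total_impact * reclaimable_share  # recoverable via insurance etc.
    unrecovered = total_impact - reclaimable        # hits the bottom line
    return ransom, reclaimable, unrecovered

# Hypothetical $1M total post-attack impact:
ransom, reclaimable, unrecovered = ransomware_cost_breakdown(1_000_000)
print(round(ransom), round(reclaimable), round(unrecovered))  # → 320000 620000 380000
```

The point of the arithmetic: even with insurance, more than a third of the impact is unrecoverable, and the ransom itself is only part of the bill.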


Generative AI agents will revolutionize AI architecture

AI agents possess advanced natural language processing (NLP) capabilities. They can comprehend, interpret, and generate human language, facilitating easy interaction and communication with users and other systems. These agents also work alongside other AI agents or human operators in collaborative and iterative workflows. Through continuous learning and feedback, they refine their outputs and improve overall performance. On paper, AI agents should be in wide use today. Look at all the pros I’ve listed. The downsides are much more difficult to understand. Even though you need tools to build AI agents, the tools are all over the place regarding what they are and how to use them. Don’t let vendors tell you otherwise. First, these are complex beasties to write and deploy. Architects who can design AI agents and developers who can effectively build AI agents are few and far between. I’ve witnessed teams announce they will use agent-based technology and then build something that falls far short of a solution for the proposed business case. Second, you can’t put much into these AI agents or they are no longer agents. You missed the point if your AI agents are vast clusters of GPUs. 


AI in Healthcare: Bridging the Gap Between Proof and Practice

“We see huge social impacts from AI in healthcare – in the data we’ve collected regionally in Pennsylvania, for example,” Dr. Sadeghian added. “Many rural areas have insufficient access to medical procedures. AI will impact society through both safety and convenience. Everybody has smartphones now; why not have the doctor in your hand? A cultural shift is underway.” AI can give a preliminary screening and keep people out of cities and congested areas, bringing access to more rural areas and saving office visits for people who need them. This also impacts transportation, walkability, and other aspects of civic planning – even pollution mitigation. Inviting the black box of AI into healthcare isn’t some hazy dream. It’s happening today. Younger generations are the most scientifically engaged ever, though, which means consensus-building on tech policy could move faster going forward. Politicians have noticed the social, cultural, and economic value of investing in science, technology, engineering, and mathematics education. 



Quote for the day:

"If you don't value your time, neither will others. Stop giving away your time and talents- start charging for it." -- Kim Garst

Daily Tech Digest - December 23, 2022

Why the industrial metaverse will eclipse the consumer one

The industrial metaverse is further ahead on the 3D front, with simulations and digital twins. The industrial metaverse is ahead on the standards front, with companies like Nvidia pushing potential standards such as Universal Scene Description (USD) through its Omniverse platform. USD has been characterized as doing for the metaverse what HTML did for the internet. In this regard, USD can lead to greater interoperability, [connecting] formerly disparate applications or ecosystems … to make workflows more seamless. ... Digital assets, similarly, are typically locked to a particular ecosystem, servicer or game. Many of the most transformative opportunities in the consumer space will also come with mainstream smart glasses, which are still years away from making a stronger impact. The enterprise and industrial metaverses are also better grounded in ROI, meaning more trials and initial deployments have a higher potential to succeed or lead to more adoption compared to consumer efforts, which have seen more pushback, such as the addition of NFTs in games in Western markets [gaining] limited traction.


Surviving the Incident

The next step to the IR playbook is to identify the "crown jewels" of the organization — the critical systems, services, and operations that, if impacted by a cyber event, would disrupt business operations and cause a loss of revenue. Similarly, understanding the collected data type, how it is transmitted and stored, and who should access it must be mapped to ensure data security. Identifying and mapping critical systems can be accomplished through penetration tests, risk assessments, and threat modeling. A risk assessment is often the first tool to identify potential attack vectors and prioritize security events. However, to achieve a proactive stance, organizations are increasingly leveraging threat intelligence and modeling to identify and address vulnerabilities and security gaps early on before a known attack occurs. The primary goal is to identify weaknesses or vulnerabilities with assets to reduce the attack surface and close all the security gaps. This guide will focus on web application security as our attack scenario. Why web application security? 


Not everything we call AI is actually 'artificial intelligence'. Here's what you need to know

Most of what we know as AI today has narrow intelligence – where a particular system addresses a particular problem. Unlike human intelligence, such narrow AI intelligence is effective only in the area in which it has been trained: fraud detection, facial recognition or social recommendations, for example. AGI, however, would function as humans do. For now, the most notable example of trying to achieve this is the use of neural networks and “deep learning” trained on vast amounts of data. Neural networks are inspired by the way human brains work. Unlike most machine learning models that run calculations on the training data, neural networks work by feeding each data point one by one through an interconnected network, each time adjusting the parameters. As more and more data are fed through the network, the parameters stabilise; the final outcome is the “trained” neural network, which can then produce the desired output on new data – for example, recognising whether an image contains a cat or a dog. The significant leap forward in AI today is driven by technological improvements in the way we can train large neural networks, readjusting vast numbers of parameters in each run thanks to the capabilities of large cloud-computing infrastructures.
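The training loop described above (feed each data point through the network, adjust the parameters a little each time, until they stabilise) can be shown at toy scale with a single sigmoid neuron learning the OR truth table. This is an illustration of the mechanism only, not a realistic deep-learning setup.

```python
import math
import random

def train_neuron(data, epochs=2000, lr=0.5, seed=0):
    """Per-example gradient updates on a one-neuron 'network'."""
    rng = random.Random(seed)
    w1, w2, b = rng.random(), rng.random(), rng.random()
    for _ in range(epochs):
        for x1, x2, target in data:
            # forward pass: weighted sum squashed by a sigmoid
            out = 1.0 / (1.0 + math.exp(-(w1 * x1 + w2 * x2 + b)))
            # backward pass: nudge each parameter toward the target
            grad = (target - out) * out * (1.0 - out)
            w1 += lr * grad * x1
            w2 += lr * grad * x2
            b += lr * grad
    return w1, w2, b

def predict(params, x1, x2):
    w1, w2, b = params
    return 1.0 / (1.0 + math.exp(-(w1 * x1 + w2 * x2 + b)))

# After training, the "trained" neuron reproduces the OR truth table.
or_data = [(0, 0, 0), (0, 1, 1), (1, 0, 1), (1, 1, 1)]
params = train_neuron(or_data)
print([round(predict(params, a, b)) for a, b, _ in or_data])  # → [0, 1, 1, 1]
```

Real networks differ in scale, not kind: millions of parameters, many layers, and batched updates on cloud infrastructure, but the same feed-forward-then-adjust loop.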


Metaverse Security Concerns Coming Into Focus as Businesses Plan For “Virtual Reality” Futures

Organizations smell potential here, with 23% responding that they are already developing initiatives even as basic specifications are still firming up. Of the respondents that expressed a desire to do business in the metaverse, the leading interest (44%) was customer engagement opportunities. Other popular areas are learning/training measures and workplace collaboration. But when asked about their concerns about expanding into this new area, respondents said that metaverse security was item #1 on the list. By and large, today’s security solutions have not yet considered the prospect of metaverse integration. Nevertheless, 86% of the respondents said that they would feel comfortable sharing user personal information between different metaverse services. Security providers may be waiting to see what users settle on in the metaverse before tailoring their products accordingly. Of the products available thus far, online games are the only ones drawing mass amounts of users (particularly the pre-existing Roblox and Fortnite) along with simple 3D world chat apps that allow users to appear as an avatar.


What’s next for AI

The big companies that have historically dominated AI research are implementing massive layoffs and hiring freezes as the global economic outlook darkens. AI research is expensive, and as purse strings are tightened, companies will have to be very careful about picking which projects they invest in—and are likely to choose whichever have the potential to make them the most money, rather than the most innovative, interesting, or experimental ones, says Oren Etzioni, the CEO of the Allen Institute for AI, a research organization. That bottom-line focus is already taking effect at Meta, which has reorganized its AI research teams and moved many of them to work within teams that build products. But while Big Tech is tightening its belt, flashy new upstarts working on generative AI are seeing a surge in interest from venture capital funds. Next year could be a boon for AI startups, Etzioni says. There is a lot of talent floating around, and often in recessions people tend to rethink their lives—going back into academia or leaving a big corporation for a startup, for example.


How to Innovate by Introducing Product Management in SMB and Non-Tech Companies

It’s common to find product managers and product owners in SaaS, technology, ecommerce, retail, and other B2C companies. Leadership in these companies long realized that understanding markets, determining product-market fits, defining customer personas, and understanding value propositions are all key to developing minimum viable solutions and delivering ongoing product enhancements. But identifying product managers and owners in non-tech companies, B2B businesses, SMBs, and the government remains a long-running work in progress. To start innovating, it comes down to transforming from stakeholder-led backlogs to product-managed, market-driven roadmaps. Tech, media, and ecommerce companies figured this out right away because chasing stakeholder-driven features often yields subpar results. More traditional businesses are likely to misdiagnose the problems with stakeholder-driven backlogs as a technology execution or platform issue. But there are a few secrets to making product management work even in the most traditional businesses.


IT Job Market: 2022's Wild Ride and What to Expect for 2023

Even as those layoff announcements were rolling in, the US Bureau of Labor Statistics job report for October showed a strong job market for tech pros and continued growth for remote jobs. In November that growth continued with IT industry association CompTIA reporting that US tech companies added 14,400 workers during the month, marking two consecutive years of monthly job growth in the sector. Tech jobs in all industry sectors increased by 137,000 positions. And while job postings for future hiring slipped in November, they still totaled nearly 270,000. As the tech sector heads into a changed 2023 employment market, it’s unclear how all these mixed signals will play out, although experts are starting to weigh in on best practices. Employers are likely looking carefully at budgets and head counts. But it will be a challenging line to walk. Employers have spent the past few years investing in employee experience programs and focusing on retaining their valuable talent. An abrupt change in direction such as mass layoffs will likely sour companies’ reputations as employers.


Inside the Next-Level Fraud Ring Scamming Billions Off Holiday Retailers

Besides the operation being stacked with technology know-how, Michael Pezely, Signifyd's director of risk intelligence, tells Dark Reading that the e-commerce threat group has sheer speed and volume of scam transactions on its side. "E-commerce orders — particularly at the enterprise level — arrive at dizzying speed," Pezely says. "Signifyd, for instance, processed as much as $42 million an hour in orders during Cyber Week. It would be virtually impossible for a human team to review that volume of orders for signs of fraud." Pezely added that merchants are on the lookout for goods being shipped to a foreign country, but this group of scammers places orders that appear to originate from the US and ship to US addresses. "Furthermore, if a merchant is relying on only its own transaction data, there likely will be a lag between the time a fraud attack begins and when it is recognized," Pezely explains. "Without having the benefit of seeing millions of transactions across thousands of merchants, a novel fraud attack might not be in plain sight for some time."


Protecting your organization from rising software supply chain attacks

The reason for the continued bombardment, said Moore, is increasing reliance on third-party code (including Log4j). This makes distributors and suppliers ever more vulnerable, and vulnerability is often equated with a higher payout, he explained. Also, “ransomware actors are increasingly thorough and use non-conventional methods to reach their targets,” said Moore. For example, ransomware actors target IT management software systems and parent companies, even where proper segmentation protocols are in place. Then, after breaching, they leverage this relationship to infiltrate the infrastructure of that organization’s subsidiaries and trusted partners. “Supply chain attacks are unfortunately common right now in part because there are higher stakes,” said Moore. “Extended supply chain disruptions have placed the industry at a fragile crossroads.” Supply chain attacks are low cost, can require minimal effort, and have the potential for high reward, said Crystal Morin, threat research engineer at Sysdig. And tools and techniques are often readily shared online, as well as disclosed by security companies, who frequently post detailed findings.


Why User Journeys Are Critical to Application Detection

The first generation of cybersecurity detection technology is rules, but rules only detect known patterns. Individualized rules require expensive experts to maintain: each application is unique, and one must be extremely familiar with its business logic, log formats, how it is used, etc., in order to write and manage rules for detecting application breaches. ... Over a decade ago, the security market adopted statistical analysis to augment rule-based solutions in an attempt to provide more accurate detection for the infrastructure and access layers. However, UEBA failed to deliver on its promise to dramatically increase accuracy and reduce false positive alerts due to a fundamentally mistaken assumption – that user behavior can be characterized by statistical quantities, such as the average daily number of activities. ... The main criterion for success in a detection solution is accuracy, which is dictated by the number of false positives and the number of false negatives. The evolution of detection solutions led to the third generation of solutions analyzing Sequences of Activity, i.e. Journeys, to contextualize activity and improve detection accuracy.
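The excerpt's contrast between per-activity statistics and journey analysis can be sketched in a few lines. The toy below (my own invented data and logic, not any vendor's implementation) flags a journey as suspicious when it contains an activity *transition* never observed in historical journeys — even though every individual activity in it is common, which is exactly the context a per-activity baseline misses:

```python
from collections import Counter

def train_transitions(journeys):
    """Count how often each (prev, next) activity pair occurs in historical journeys."""
    pairs = Counter()
    for journey in journeys:
        for prev, nxt in zip(journey, journey[1:]):
            pairs[(prev, nxt)] += 1
    return pairs

def is_anomalous(journey, pairs):
    """Flag a journey containing a transition never seen in training."""
    return any((p, n) not in pairs for p, n in zip(journey, journey[1:]))

history = [
    ["login", "view_account", "logout"],
    ["login", "view_account", "transfer", "logout"],
]
pairs = train_transitions(history)

# Every activity here is individually common, but "login -> transfer"
# (skipping the account view) is a sequence never observed before.
print(is_anomalous(["login", "transfer", "logout"], pairs))       # True
print(is_anomalous(["login", "view_account", "logout"], pairs))   # False
```

A real journey-analysis product would score transition likelihoods rather than use a hard seen/unseen test, but the principle — context from sequences, not averages — is the same.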



Quote for the day:

"Before you revel in the anticipation of tomorrow, toil in the preparation of today." -- Tim Fargo

Daily Tech Digest - February 05, 2022

Quantum Computing: Researchers Achieve 100 Million Quantum Operations

Quantum computing systems are notoriously difficult to maintain in coherent states. The fragile nature of the "ordered chaos" is such that qubit information and qubit connection (entanglement) usually deteriorate on timescales far shorter than a second. The new research brings quantum computing coherency to human-perceivable scales of time. Using a technique they've termed "single shot readout," the researchers used precise laser pulses to add single electrons to qubits. ... While it may not sound like much, time flows differently in computing; going from stable quantum states on the order of fractions of a second up to five seconds increases the amount of useful computing time extracted from the available qubits. Moreover, it opens up new ways of increasing processing power beyond pure qubit count - the researchers calculate that they can perform around 100 million quantum operations in that five-second slice. So perhaps quantum computing will be a threat to Bitcoin and the current government, commercial and personal encryption schemes much earlier than expected?
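The headline figures imply a simple back-of-envelope check on the operation rate — roughly 50 nanoseconds per operation, a plausible gate time:

```python
coherence_time_s = 5.0       # reported coherence window
operations = 100_000_000     # operations claimed within that window

# Average time budget per operation implied by the two figures.
time_per_op_s = coherence_time_s / operations
print(time_per_op_s)  # 5e-08, i.e. about 50 nanoseconds per operation
```

This is arithmetic on the article's own numbers only; the actual gate times in the study may differ.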


Meta to bring in mandatory distances between virtual reality avatars

Meta announced on Friday that it is introducing personal boundaries on two VR apps: Horizon Worlds, where people can meet fellow VR users and design their own world; and Horizon Venues, which hosts VR events such as comedy shows or music gigs. The company said the distance between people will be the VR equivalent of four feet. “A personal boundary prevents anyone from invading your avatar’s personal space. If someone tries to enter your personal boundary, the system will halt their forward movement as they reach the boundary,” said the company. Meta is introducing the 4ft boundary as a default setting and will consider further changes such as letting people set their own boundaries. “We think this will help to set behavioural norms – and that’s important for a relatively new medium like VR,” said Meta. The UK data watchdog has also said it is seeking clarification from Meta about parental controls on the company’s popular Oculus Quest 2 VR headset, as campaigners warned that it could breach an online children’s safety code. 
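Meta hasn't published implementation details, but the behavior described — halting forward movement as an avatar reaches the boundary — amounts to a distance clamp. A hypothetical 2D sketch (coordinates and function names are my own, purely illustrative):

```python
import math

BOUNDARY_FT = 4.0  # Meta's reported default distance between avatars

def apply_boundary(target, other):
    """Clamp a requested move so it stops at another avatar's personal boundary.

    target: (x, y) position the avatar is trying to move to
    other:  (x, y) position of the other avatar
    """
    if math.dist(target, other) >= BOUNDARY_FT:
        return target  # move allowed as requested
    # Otherwise stop on the boundary circle, along the line from the
    # other avatar toward the requested position.
    dx, dy = target[0] - other[0], target[1] - other[1]
    d = math.hypot(dx, dy) or 1e-9  # avoid division by zero if positions coincide
    return (other[0] + dx / d * BOUNDARY_FT, other[1] + dy / d * BOUNDARY_FT)

# Trying to step within 2 ft of another avatar is halted at the 4 ft boundary.
pos = apply_boundary(target=(2.0, 0.0), other=(0.0, 0.0))
print(pos)  # (4.0, 0.0)
```

A production system would work in 3D and integrate with the physics engine, but the clamp logic is the core of the described behavior.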


Platform Engineering Challenge: Security vs. Dev Experience

There are a few things that you can do to make things easier for developers. First, ensure that all developers affected by the new policies are aware of them. That lack of knowledge is a common reason for mistakes. Once the developer team knows the security policies, they can work with those policies in mind. Next, remember that mistakes can happen. To mitigate this, automate as many items as possible with a proper continuous integration (CI) pipeline. In the CSP example, it is possible to crawl your application automatically and report CSP violations with a little bit of initial setup. Automating the verification step can drastically reduce the possibility of mistakes. This doesn’t just apply to CSP; it is relevant for any check you want to implement to ensure that your devs follow particular guidelines or standards. Another potential inter-team headache is vulnerabilities in third-party packages. Usually, the dev teams will install new packages. Depending on your business structure, though, it might fall to the platform or security teams to fix any vulnerabilities found. 
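As a concrete illustration of automating the CSP check described above (a simplified sketch, not a full crawler), a CI step might parse the Content-Security-Policy header a page serves and fail the build on missing or risky directives. The required-directive list here is an assumed example policy:

```python
REQUIRED = {"default-src", "script-src"}  # directives our example policy must set

def parse_csp(header):
    """Parse a Content-Security-Policy header into {directive: [values]}."""
    policy = {}
    for directive in filter(None, (d.strip() for d in header.split(";"))):
        name, *values = directive.split()
        policy[name] = values
    return policy

def check_csp(header):
    """Return a list of problems: missing required directives, or unsafe-inline scripts."""
    policy = parse_csp(header)
    problems = [f"missing {d}" for d in REQUIRED if d not in policy]
    if "'unsafe-inline'" in policy.get("script-src", []):
        problems.append("script-src allows 'unsafe-inline'")
    return problems

print(check_csp("default-src 'self'; script-src 'self'"))                    # []
print(check_csp("default-src 'self'; script-src 'self' 'unsafe-inline'"))
```

In a pipeline, a non-empty problem list would fail the job, surfacing violations before they reach developers as surprises in production.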


Satya Nadella: Microsoft has “permission to build the next Internet”

[In] the Microsoft I grew up in, I always think about three things—and we added a fourth. The three things that we always had are: we built tools for people to write software; we built tools for people to drive their personal and organizational productivity; and we built games. That’s the three things that Microsoft has done from time immemorial. The first game, I think, was built before Windows was there; it existed on DOS. And so, to me, gaming, coding, productivity, or knowledge worker tools are at the core. The thing that we added pretty successfully—that most people thought we would never be able to do—is become an enterprise company... actually really build enterprise infrastructure... and business applications. And guess what? We now do that as well. ... Take what’s happening with the metaverse. What is the metaverse? Metaverse is essentially about creating games. It is about being able to put people, places, things [in] a physics engine and then having all the people, places, things in the physics engine relate to each other.


Council Post: Role of AI in creating inclusive products & solutions

Artificial Intelligence has the potential to create a more inclusive society. Let’s consider two dimensions: language and disability. Language is often the greatest barrier towards access to information, and hence, opportunities. Today language translation using AI is removing that barrier. For example, Microsoft’s Azure AI now empowers organizations to translate between 100 languages and dialects globally, making information in text and documents accessible to more than 5.6 billion people worldwide. These include not only the world’s most spoken languages like English, Chinese, Hindi, Arabic and Spanish, but also dialects that are native or preferred by a smaller population. There are close to 7,000 languages spoken around the world, but sadly, every two weeks a language dies with its last speaker. Recent advances in AI have enabled inclusion of low resource, and often endangered, languages and dialects such as Tibetan, Assamese and Inuktitut. A multilingual AI model called Z-code combines several languages from a language family such as Hindi, Marathi, and Gujarati.


Mozilla is shutting down its VR web browser, Firefox Reality

A top VR web browser is closing down. Today, Mozilla announced it’s shutting down its Firefox Reality browser — the four-year-old browser built for use in virtual reality environments. The technology had allowed users to access the web from within their VR headset, doing things like visiting URLs, performing searches and browsing both the 2D and 3D internet using your VR hand controllers, instead of a mouse. Firefox Reality first launched in fall 2018 and has been available on Viveport, Oculus, Pico and HoloLens platforms through their various app stores. While capable of surfing the 2D web, the expectation was that users would largely use the new technology to browse and interact with the web’s 3D content, like 360-degree panoramic images and videos, 3D models and WebVR games, for example. But in an announcement published today, Mozilla says the browser will be removed from the stores where it’s been available for download in the “coming weeks.” Mozilla is instead directing users who still want to utilize a web browser in VR to Igalia’s upcoming open source browser, Wolvic, which is based on Firefox Reality’s source code.


How SQL can unify access to APIs

Software construction requires developers to compose solutions using a growing proliferation of APIs. Often there’s a library to wrap each API in your programming language of choice, so you’re spared the effort of making raw REST calls and parsing the results. But each wrapper has its own way of representing results, so when composing a multi-API solution you have to normalize those representations. Since combining results happens in a language-specific way, your solution is tied to that language. And if that language is JavaScript or Python or Java or C# then it is arguably not the most universal and powerful way to query (or update) a database. ... Steampipe is an open-source tool that fetches data from diverse APIs and uses it to populate tables in a database. The database is Postgres, which is, nowadays, a platform on which to build all kinds of database-like systems by creating extensions that deeply customize the core. One class of Postgres extension, the foreign data wrapper (FDW), creates tables from external data. Steampipe embeds an instance of Postgres that loads an API-oriented foreign data wrapper.
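Steampipe does this normalization with Postgres foreign data wrappers, but the underlying idea — flatten each API's idiosyncratic shape into one schema, then query it with SQL — can be shown with the standard library's sqlite3 and two invented wrapper payloads (the field names and data below are made up for illustration):

```python
import sqlite3

# Two hypothetical API wrappers return the "same" data in different shapes.
github_issues = [{"title": "Bug A", "state": "open"},
                 {"title": "Bug B", "state": "closed"}]
jira_issues = [{"summary": "Bug C", "status": {"name": "Open"}}]

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE issue (source TEXT, title TEXT, state TEXT)")

# Normalize each wrapper's representation into one shared schema...
conn.executemany("INSERT INTO issue VALUES ('github', ?, ?)",
                 [(i["title"], i["state"]) for i in github_issues])
conn.executemany("INSERT INTO issue VALUES ('jira', ?, ?)",
                 [(i["summary"], i["status"]["name"].lower()) for i in jira_issues])

# ...so a single SQL query spans both APIs, independent of either client library.
open_issues = conn.execute(
    "SELECT source, title FROM issue WHERE state = 'open' ORDER BY title").fetchall()
print(open_issues)  # [('github', 'Bug A'), ('jira', 'Bug C')]
```

The payoff is the last query: once the per-API translation is done (by you here, by a foreign data wrapper in Steampipe), the composing logic lives in SQL rather than in any one host language.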


The wrong data privacy strategy could cost you billions

Regulators have long understood that de-identification is not a silver bullet due to re-identification with side information. When regulators defined anonymous or de-identified information, they refrained from giving a precise definition and deliberately opted for a practical one based on the reasonable risks of someone being re-identified. GDPR mentions “all the means reasonably likely to be used” whereas CCPA defines de-identified to be “information that cannot reasonably identify” an individual. The ambiguity of both definitions places the burden of privacy risk assessment onto the compliance team. For each supposedly de-identified dataset, they need to prove that the re-identification risk is not reasonable. To meet those standards and keep up with proliferating data sharing, organizations have had to beef up their compliance teams. This appears to have been the process that Netflix followed when they launched a million-dollar prize to improve its movie recommendation engine in 2006. They publicly released a stripped-down version of their dataset with 500,000 movie reviews, enabling anyone in the world to develop and test prediction engines that could beat theirs.
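One standard way a compliance team can quantify the re-identification risk the excerpt describes is k-anonymity: the smallest number of records sharing any one combination of quasi-identifiers. A minimal sketch on invented data (the columns and rows below are hypothetical, not the Netflix dataset):

```python
from collections import Counter

def k_anonymity(rows, quasi_identifiers):
    """Smallest group size over all quasi-identifier combinations.

    k = 1 means at least one record is unique on those attributes,
    so someone with matching side information can re-identify it.
    """
    groups = Counter(tuple(r[q] for q in quasi_identifiers) for r in rows)
    return min(groups.values())

reviews = [  # hypothetical "de-identified" records
    {"zip": "10001", "year": 1975, "movie": "M1"},
    {"zip": "10001", "year": 1975, "movie": "M2"},
    {"zip": "94105", "year": 1990, "movie": "M3"},
]
print(k_anonymity(reviews, ["zip", "year"]))  # 1 -> one record is uniquely identifiable
```

Measures like this make the regulators' "reasonable risk" language concrete: a low k on plausible side-information attributes is evidence the risk is not acceptably small.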


What the Rise in Cyber-Recon Means for Your Security Strategy

Enterprises need to be aware that an increase in new cybercriminals armed with advanced technologies will increase the likelihood and volume of attacks. Standard tools must be able to scale to address potential increases in attack volumes. These tools also need to be enhanced with artificial intelligence (AI) to detect attack patterns and stop threats in real time. Critical tools should include anti-malware engines using AI detection signatures, endpoint detection and response (EDR), advanced intrusion prevention system (IPS) detection, sandbox solutions augmented with MITRE ATT&CK mappings and next-gen firewalls (NGFWs). In the best-case scenario, these tools are deployed consistently across the distributed network (data center, campus, branch, multi-cloud, home office, endpoint) using an integrated security platform that can detect, share, correlate and respond to threats as a unified solution. Cybercriminals are opportunistic, and they’re also growing increasingly crafty. We’re now seeing them spend more time on the reconnaissance side of cyberattacks. 


Want Real Cybersecurity Progress? Redefine the Security Team

The strategies described above share one trait in common: They all leave security mostly in the hands of an elite security team. No matter how many security tools a business buys, how far left it shifts security, or how many compliance rules it enforces, security operations still remain the realm primarily of security engineers and analysts (perhaps with just a bit of help from developers and IT Ops teams at businesses that take DevSecOps seriously). That fact is part of what makes the concept of collective security so innovative. It fundamentally breaks a mold that has been in place for decades: the mold that forces a single team to “own” security across the entire business, leaving little opportunity for stakeholders who are not security experts to contribute to security initiatives. By shifting to a strategy in which security is everyone’s responsibility — and, just as important, where everyone has the ability to define security rules and validate resources without having to know how to code or use sophisticated security tools — businesses make it possible for everyone to understand the state of cybersecurity in their organization, as well as to help enforce cybersecurity standards.



Quote for the day:

"Great leaders go forward without stopping, remain firm without tiring and remain enthusiastic while growing" -- Reed Markham

Daily Tech Digest - August 09, 2021

Digital transformation depends on diversity

Diversity of skills, perspectives, experiences and geographies has played a key role in our digital transformation. At Levi Strauss & Co., our growing strategy and AI team doesn’t include solely data and machine learning scientists and engineers. We recently tapped employees from across the organization around the world and deliberately set out to train people with no previous experience in coding or statistics. We took people in retail operations, distribution centers and warehouses, and design and planning and put them through our first-ever machine learning bootcamp, building on their expert retail skills and supercharging them with coding and statistics. We did not limit the required backgrounds; we simply looked for people who were curious problem solvers, analytical by nature and persistent to look for various ways of approaching business issues. The combination of existing expert retail skills and added machine learning knowledge meant employees who graduated from the program now have meaningful new perspectives on top of their business value. 


The hottest hyper-automation trends disrupting business today

The global pandemic has highlighted a need for more flexible customer service, using digital channels, as well as the possibility of organisations delivering service without being tied down to a particular location. Both factors have driven increased adoption of hyper-automation, and have led to more differentiation in customer service joining the biggest trends in the space. According to Luis Huerta, vice-president and intelligent automation practice head, Europe at Firstsource, “as fixed-schedule, routine, processes and tasks are automated in the back-office, the need for staff to be tied to a specific location diminishes. Furthermore, with hyper-automation, the role of human colleagues switches from hands-on task execution to managing and monitoring bots, and dealing with complex business exceptions.  ... As end customers are increasingly able to leverage automated channels to solve their needs, the pressure on support staff reduces and we give front-line colleagues an ability to focus on complex enquiries where a human touch is critical.


How Drife and blockchain are disrupting the ride-sharing industry

Blockchain technology offers a way to make life and work easier, regardless of the industry or class, and the ride-sharing industry is one a lot of disruptors and companies in the blockchain space are looking to become major players in. There have been a lot of bold claims about giving drivers and users more freedom through the use of decentralized technology such as that of the blockchain. One of the companies that made this claim is Drife. Drife is a decentralized ride-sharing and peer-to-peer ride-sharing platform that was started with the intent of empowering the drivers and riders within its ecosystem. The app is built on the Aeternity blockchain and its business model is built on taking zero commission from its drivers. Drife will instead charge drivers an annual fee on its platform to access the app. “We believe when there’s a driver who spends 14 to 16 hours behind the wheel, he deserves to take back all the income to his home,” said Sheikh. ... While Uber, Lyft and others were formed with good intentions, they have become centralized, continuously paying their drivers less and charging their riders more.


AI Wrote Better Phishing Emails Than Humans in a Recent Test

Researchers have long debated whether it would be worth the effort for scammers to train machine learning algorithms that could then generate compelling phishing messages. Mass phishing messages are simple and formulaic, after all, and are already highly effective. Highly targeted and tailored “spearphishing” messages are more labor intensive to compose, though. That's where NLP may come in surprisingly handy. At the Black Hat and Defcon security conferences in Las Vegas this week, a team from Singapore's Government Technology Agency presented a recent experiment in which they sent targeted phishing emails they crafted themselves and others generated by an AI-as-a-service platform to 200 of their colleagues. Both messages contained links that were not actually malicious but simply reported back clickthrough rates to the researchers. They were surprised to find that more people clicked the links in the AI-generated messages than the human-written ones—by a significant margin. “Researchers have pointed out that AI requires some level of expertise. It takes millions of dollars to train a really good model,” says Eugene Lim 


Data warehousing has problems. A data mesh could be the solution

Simply stated, a data mesh invests ownership of data in the people who create it. They’re responsible for ensuring quality and relevance and for exposing data to others in the organization who might want to use it. A consistent and organization-wide set of definitions and governance standards ensures consistency, and an overarching metadata layer lets others find what they need. “Data mesh is the concept of data-aligned data products,” Dehghani said in a video introduction. “Find the analytical data each part of the organization can share.” Dehghani lists eight attributes of a data mesh. Elements must be discoverable, understandable, addressable, secure, interoperable, trustworthy and natively accessible and they must have value on their own. The concept of decentralized data management is nothing new. Distributed databases rode the coattails of the client/server craze in the 1990s. Part of the appeal of the Hadoop software library of a decade ago was that processing was distributed to where data lived. 


Why AI isn't the only answer to cybersecurity [Q&A]

The battle between an attacker and the defenders is exactly the reason where the human factor comes into play and AI helps those defenders to focus and make decisions that optimize their time and skills. What we're seeing today is basic technology that’s designed for very specific attacks. It's only in 0.1 percent of attacks that very sophisticated technology is being used. There are millions of attacks every day, so you'll see advanced techniques; whereas, nine million other attacks are happening that are just super rudimentary, garden variety ransomware attacks and viruses. The latter are the mass of the attacks, and they're also the mass of the damage. If you're a nuclear reactor, then somebody's going to do massive harm, but if you're an average SMB, then you're a lot more susceptible to those garden variety attacks that we call drive-bys. Those machines aren't cutting edge and those attacks aren’t either. They're just the common things that have been learned over the past few years. However, with the forefront of attacks and premium ATPs, it'll be a battle of wits between the advanced technology versus their technology. 


When Will Quantum Computing Finally Become Real?

It's important to remember that quantum computers aren't just faster computers, but harbingers of an entirely new type of computation. “If realized in the best possible way imaginable, they would fundamentally change the world as we know it,” says Tom Halverson, a staff quantum scientist on the quantum computing team at management and information technology consulting firm Booz Allen Hamilton. “Because of this, many powerful forces are positioning themselves to be ‘the first,’” he states. “When the quantum computing revolution happens, it will happen quickly.” Quantum computing is already real, but it's simply not yet practical, observes Mario Milicevic, an IEEE member and a staff communication systems engineer at MaxLinear, a broadband communications semiconductor products firm. He notes that IT leaders will need to understand whether a quantum computer is the appropriate tool for the type of problem their organization is trying to solve. “For the majority of problems, classical computers will actually outperform quantum computers and do so at a much lower cost,” Milicevic states.


New connections between quantum computing and machine learning in computational chemistry

A quantum computer, integrated with our new neural-network estimator, combines the advantages of the two approaches. While a quantum circuit of choice is being executed, we exploit the power of quantum computers to interfere states over an exponentially-growing Hilbert space. After the quantum interference process has run its course, we obtain a finite collection of measurements. Then a classical tool—the neural network—can use this limited amount of data to still efficiently represent partial information of a quantum state, such as its simulated energy. This handing of data from a quantum processor to a classical network leaves us with the big question: How good are neural networks at capturing the quantum correlations of a finite measurement dataset, generated by sampling molecular wave functions? To answer this question, we had to think about how neural networks could emulate fermionic matter. Neural networks had been used so far for the simulation of spin lattice and continuous-space problems.


The obstacles VR will overcome to go mainstream for business users

The truth is that VR is not far off becoming an essential tool for helping businesses to become smarter and more efficient in the way they train staff. For example, vocational training provider Mimbus uses VR training for a range of skills including carpentry, construction, decorating, electrical engineering, and food processing. Working with HP VR hardware, the immersive nature of VR removes the pressures of getting things wrong in real life and increases confidence when it comes to performing skills on the job. This solution can help businesses significantly cut training costs. VR can also help businesses to communicate with clients and design new products and services. In fact, in a sales and marketing capacity, studies have shown that customers have a 25% higher level of focus when in a virtual space, showing that VR is a great way to capture customers’ attention. Alongside biosensors and AI, VR could be used in the future to test how drivers feel about a new car interior before it has been built, or improve the outcome of virtual meetings and collaboration by capturing the nonverbal cues of participants. 


Disentangling AI, Machine Learning, and Deep Learning

Expert systems were proving to be brittle and costly, setting the stage for disappointment, but at the same time learning-based AI was rising to prominence, and many researchers began to flock to this area. Their focus on machine learning included neural networks, as well as a wide variety of other algorithms and models like support vector machines, clustering algorithms, and regression models. The turn of the 1980s into the 1990s is regarded by some as the second AI winter, and indeed hundreds of AI companies and divisions shut down during this time. Many of these companies were engaged in building what was at the time high-performance computing (HPC), and their closing down was indicative of the important role Moore’s law would play in AI progress. Deep Blue, the chess champion system developed by IBM in the late 1990s, wasn’t powered by a better expert system, but rather a compute-enabled alpha-beta search. Why pay a premium for a specialized Lisp machine when you can get the same performance from a consumer desktop?



Quote for the day:

"Leaders must be good listeners. It's rule number one, and it's the most powerful thing they can do to build trusted relationships." -- Lee Ellis

Daily Tech Digest - August 02, 2020

Test Automation Best Practices

Designing tests and test data is the most crucial and time-consuming portion of the testing process. To be valid, test design must be precise in indicating the software functionalities to be tested. During the design phase, test conditions are identified based on specified test requirements, effective test modules and metrics are developed, and the anticipated behavior that will yield valid results is determined. Automated testing performs evaluations against manual test requirements to verify the reliability of the automated process. The use of an automation framework to configure testing modules characterizes automated testing. The automation framework supports the development of automated test scripts while it also monitors and maintains test results and related documentation, and it serves as the structural foundation of the automated test suite. Automation works best when focused on identified priority factors for deployment. Manual testing can precede automated testing to contribute test conditions and data that test automation can use for regression and other types of testing.
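The separation the excerpt describes — test conditions and data designed up front, then reused by the automation — can be sketched as a data-driven suite. The function under test, case table, and runner below are all hypothetical examples of the pattern:

```python
def discount(price, customer_type):
    """Hypothetical function under test: apply a percentage discount by customer type."""
    rates = {"standard": 0.0, "member": 0.10, "vip": 0.20}
    return round(price * (1 - rates[customer_type]), 2)

# Test design lives as data: conditions and expected behavior in one table,
# so regression runs reuse it instead of hand-executed manual steps.
TEST_CASES = [
    # (price, customer_type, expected)
    (100.0, "standard", 100.0),
    (100.0, "member", 90.0),
    (100.0, "vip", 80.0),
]

def run_suite():
    """Execute every designed case and report pass/fail counts."""
    results = {"passed": 0, "failed": 0}
    for price, ctype, expected in TEST_CASES:
        actual = discount(price, ctype)
        results["passed" if actual == expected else "failed"] += 1
    return results

print(run_suite())  # {'passed': 3, 'failed': 0}
```

Frameworks like pytest provide this parametrization (plus reporting and maintenance of results) out of the box; the point here is only that well-designed test data is the asset the automation runs against.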


Winning in Digital Innovation: Turning Scale and Legacy into Strengths

Over the past few years, disruptive forces have hit industry after industry. Travel has been disrupted by Priceline, Expedia, TripAdvisor, and Airbnb, transportation by Uber, and retail by Amazon and Alibaba. For established businesses, the most disruptive threats tend to come from outside traditional competition. New companies not only spot opportunities to create value that many incumbents fail to see, they also tend to operate with different business models. In fact, it’s no longer about having a level playing field. The disruptors are playing an entirely new game. Google is a master of this new game, converting an array of industries into advertising revenue. Amazon is another serial disruptor with its Amazon Prime now in a two-horse race with Netflix—undermining the model of traditional broadcast industries. Even those that have not yet been significantly impacted by these forces are not safe. Over the next five years, 40 percent of companies will face some form of digital disruption, according to Forbes magazine. Artificial intelligence is beginning to attack knowledge-based industries previously seen as safe from disruption, thanks in large part to companies such as Google and Amazon offering “AI on tap.”


How Payments Fintech Is Using Banking As A Service To Drive Growth

There are two core challenges that Banking as a Service helps an international payments company overcome. The first is the need for a regulated entity to be involved in offering many core banking services, such as checking accounts or savings and lending products. The second is that the technology requirements and capabilities needed to offer these products, such as maintaining account ledgers for customer accounts, are very different from those of core payments services. Obtaining the necessary regulatory licenses and building the technology can be two of the most expensive cost items for a financial services company. Banking as a Service exists to reduce both the time and money Fintechs spend on these two items, allowing them to focus on their core businesses. And for cross-border payments companies, or Fintechs with international ambitions, a whole additional level of complexity comes from adding a geographic dimension. Regulations and technologies differ considerably from country to country, which means more time and more cost. We spoke with the CEOs and senior management of various Banking as a Service companies in the UK and US to understand what is driving the growth in Banking as a Service.


Here’s why IT departments need predictive analytics

AI-based detection platforms are capable of monitoring IT systems in real time, checking for early signs of potential failures. To take one example, my company Appnomic has handled 250,000 severe IT incidents for our clients with AI, which equals more than 850,000 man-hours of work. By harnessing machine learning, such platforms can use past data to learn how problems typically develop, enabling a company to step in before anything unfortunate occurs. In 2017, Gartner coined the term "artificial intelligence systems for IT operations" (AIOps) to describe this kind of AI-driven predictive analysis, and the market research firm believes that the use of AIOps will grow considerably over the next few years. In 2018, only 5 percent of large enterprises were using AIOps, but the firm estimates that this figure will rise to 30 percent by 2023. This growth will be driven by the fact that several benefits come from applying machine learning and data science to IT systems. Aside from detecting likely problems before they occur, AI can significantly reduce false alarms, because it gains a more reliable grasp of what actually leads to failures than previous technologies and human operators.
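The core idea, learning what "normal" looks like from past data and flagging deviations before they become incidents, can be illustrated with a minimal sketch. This is not Appnomic's method or any vendor's actual algorithm, just a simple rolling z-score detector over a metric stream, with `detect_anomalies` and its parameters chosen for illustration:

```python
# Minimal anomaly-detection sketch for a stream of metric samples
# (e.g., response times): flag any sample that deviates more than
# `threshold` standard deviations from the recent rolling baseline.
from collections import deque
from statistics import mean, stdev

def detect_anomalies(samples, window=20, threshold=3.0):
    """Return the indices of samples that look anomalous relative to
    the mean/stdev of the preceding `window` observations."""
    history = deque(maxlen=window)
    flagged = []
    for i, value in enumerate(samples):
        if len(history) == window:  # baseline is ready
            mu, sigma = mean(history), stdev(history)
            if sigma > 0 and abs(value - mu) / sigma > threshold:
                flagged.append(i)
        history.append(value)
    return flagged

if __name__ == "__main__":
    # 30 normal readings around 100ms, then one 500ms spike.
    metrics = [100.0 + (i % 5) for i in range(30)] + [500.0]
    print(detect_anomalies(metrics))
```

Real AIOps platforms learn far richer failure signatures than a single z-score, but the design point is the same: the baseline is learned from history rather than hand-set, which is also why such systems raise fewer false alarms than static thresholds.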


The Garmin Hack Was a Warning

Recent victims include not just Garmin but Travelex, an international currency exchange company, which ransomware hackers successfully hit on New Year’s Eve last year. Cloud service provider Blackbaud—relatively low-profile, but a $3.1 billion market cap—disclosed that it paid a ransom to prevent customer data from leaking after an attack in May. And those are just the cases that go public. “There are certainly rather large organizations that you are not hearing about who have been impacted,” says Kimberly Goody, senior manager of analysis at security firm FireEye. “Maybe you don’t hear about that because they choose to pay or because it doesn’t necessarily impact consumers in a way it would be obvious something is wrong.” Bigger companies make attractive ransomware targets for self-evident reasons. “They’re well-insured and can afford to pay a lot more than your little local grocery store,” says Brett Callow, a threat analyst at antivirus company Emsisoft. But ransomware attackers are also opportunistic, and a poorly secured health care system or city—neither of which can tolerate prolonged downtime—has long offered better odds for a payday than corporations that can afford to lock things down.


Facebook’s newest proof-of-concept VR headset looks like a pair of sunglasses

The proof-of-concept glasses aren’t just thin for looks, though — they also apparently beam images to your eyes in a way that’s different from standard VR headsets on the market today. I’ll let Facebook’s research team explain one of those techniques, called “holographic optics”: Most VR displays share a common viewing optic: a simple refractive lens composed of a thick, curved piece of glass or plastic. We propose replacing this bulky element with holographic optics. You may be familiar with holographic images seen at a science museum or on your credit card, which appear to be three-dimensional with realistic depth in or out of the page. Like these holographic images, our holographic optics are a recording of the interaction of laser light with objects, but in this case the object is a lens rather than a 3D scene. The result is a dramatic reduction in thickness and weight: The holographic optic bends light like a lens but looks like a thin, transparent sticker. The proof-of-concept headset also uses a technique Facebook calls “polarization-based optical folding” to help reduce the amount of space between the actual display and the lens that focuses the image.


Regulatory Uncertainty Greatest Problem For Blockchain Entrepreneurs, Says Producer

A regulatory environment characterized by widespread uncertainty is the single biggest challenge facing entrepreneurs in the digital currency and blockchain industry, according to J.D. Seraphine, who produced the docuseries “Open Source Money.” ... The U.S. government has had an overall uneven approach to regulating digital currencies and blockchain. It is a fairly new and complex technology so part of that is attributed to a learning curve for regulators and government officials. There are also multiple agencies who have claimed jurisdiction over the regulation of digital assets each classifying them differently, making it very difficult for companies to know how to operate in this industry in the U.S. The industry needs clear regulations and rules or for the government to step back completely like they did with the early days of the internet. I believe this gray area of uncertainty is the worst thing for entrepreneurs and companies attempting to operate here, and it has led to other countries moving ahead of the U.S. in pioneering what many are calling the most important technology since the creation of the internet.


Black Hat Virtually: An Important Time to Come Together as a Community

What concerns me the most about the moment we're in right now is that the bad actors are getting more sophisticated by the day. The simple attacks don't work as often anymore. I've seen this script numerous times in the course of my career when I look at the work our research teams publish. What worked six months ago may not work now. The only way we can fight back against a more sophisticated opponent is through knowledge-sharing and collective protection, both formal and informal. I'm grateful that the Black Hat community is there to swap war stories of how we've succeeded — and failed — against adversaries. Those conversations, even digitally, will make the difference. Cybersecurity is a team sport. The conversations that the cybersecurity community will have at this year's Black Hat (and at the subsequent DEF CON) will be instrumental in shaping how we all respond going forward as the world has changed. It's our responsibility, as a security community, to take this digital conference just as seriously as we would take an in-person one. 


Is Ethical Use of AI the Only Way Forward?

Companies across the world are spending a great deal of time and money on AI, and experts are doing extensive research to develop high-quality, genuinely useful AI-based tools. AI is already popular and will only become more so. But do we know how it should best be used? Are we looking only at the ethical uses of AI, or is anyone trying to build something unethical with it as well? Artificial Intelligence is sometimes dismissed as overhyped, though it is not, and we have also been reading about some of its dangers in the recent past. AI has mostly proved useful for humans, but because it mimics human intelligence, some risk is involved as well. AI becomes a problem mainly when humans find it difficult to understand how to use it and make the most of it. The intentions of the people using it also have to be good: AI itself is not harmful, but users must make sure that AI tools are used rightly. Even so, Artificial Intelligence remains a source of some worry for humans.


Enterprise architecture heats up to meet changing needs

Skills is definitely one of the biggest challenges at the moment. Most people are making the decision to expand their EA, or start an EA if they haven't had one, and they just move in people from one box to another. Just because you can code software doesn't mean you can think like an architect. If you are a systems engineer, you know the processes and systems, but it doesn't mean you can do capability modeling and things like that. When it comes to tools, one of the biggest barriers to EAs moving forward is ROI. The reason it's hard to come up with an ROI is because people don't do activity-based accounting. They don't identify how long they spend doing all of their tasks. If they had that information, they could say, 'I can save this amount of money if I automate these things.' The other big barrier is that people on the business side are now tech-savvy, and they question the need for EA. They don't want EAs telling them to use certain technology. A lot of the business [leaders] are now thinking, 'IT is just a cost center. I want [IT] to be an order taker.' 




Quote for the day:

"And no heart has ever suffered when it goes in search of its dream." -- Paulo Coelho