
Daily Tech Digest - July 09, 2025


Quote for the day:

"Whenever you see a successful person you only see the public glories, never the private sacrifices to reach them." -- Vaibhav Shah


Why CIOs see APIs as vital for agentic AI success

API access also goes beyond RAG. It allows agents and their underlying language models not just to retrieve information but also to perform database mutations and trigger external actions. This shift allows agents to carry out complex, multi-step workflows that once required multiple human touchpoints. “AI-ready APIs paired with multi-agentic capabilities can unlock a broad range of use cases, which have enterprise workflows at their heart,” says Milind Naphade, SVP of technology and head of AI foundations at Capital One. In addition, APIs are an important bridge out of previously isolated AI systems. ... AI agents can make unprecedented optimizations on the fly using APIs. Gartner reports that PC manufacturer Lenovo uses a suite of autonomous agents to optimize marketing and boost conversions. With the oversight of a planning agent, these agents call APIs to access purchase history, product data, and customer profiles, and trigger downstream applications in the server configuration process. ... But the bigger wins will likely be increased operational efficiency and cost reduction. As Fox describes, this stems from a newfound best-of-breed business agility. “When agentic AI can dynamically reconfigure business processes, using just what’s needed from the best-value providers, you’ll see streamlined operations, reduced complexity, and better overall resource allocation,” she says.
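The planning-agent pattern described here — an orchestrator dispatching API calls as tools and aggregating the results — can be sketched in a few lines. All function and field names below are illustrative stand-ins, not from any vendor SDK:

```python
# Minimal sketch of an agent invoking APIs as tools. Each tool stands in
# for a real API call (e.g. an HTTP GET against a customer-data service).

def get_purchase_history(customer_id):
    # Stand-in for a real endpoint returning the customer's orders
    return [{"sku": "X1-Carbon", "qty": 1}]

def get_customer_profile(customer_id):
    # Stand-in for a customer-profile service
    return {"segment": "smb", "region": "EMEA"}

TOOLS = {
    "purchase_history": get_purchase_history,
    "customer_profile": get_customer_profile,
}

def run_workflow(customer_id, plan):
    """A 'planning agent' executes a multi-step plan by calling API tools
    in order and collecting their outputs into shared context."""
    context = {}
    for step in plan:
        context[step] = TOOLS[step](customer_id)
    return context

result = run_workflow("c-42", ["customer_profile", "purchase_history"])
```

In a production system the plan itself would come from the language model, and each tool would carry authentication, rate limiting, and an audit trail.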


What we can learn about AI from the ‘dead internet theory’

The ‘dead internet theory,’ or the idea that much of the web is now dominated by bots and AI-generated content, is largely speculative. However, the concern behind it is worth taking seriously. The internet is changing, and the content that once made it a valuable source of knowledge is increasingly diluted by duplication, misinformation, and synthetic material. For the development of artificial intelligence, especially large language models (LLMs), this shift presents an existential problem. ... One emerging model for collecting and maintaining this kind of data is Knowledge as a Service (KaaS). Rather than scraping static sources, KaaS creates a living, structured ecosystem of contributions from real users (often experts in their fields) who continuously validate and update content. This approach takes inspiration from open-source communities but remains focused on knowledge creation and maintenance rather than code. KaaS supports AI development with a sustainable, high-quality stream of data that reflects current thinking. It’s designed to scale with human input, rather than in spite of it. ... KaaS helps AI stay relevant by providing fresh, domain-specific input from real users. Unlike static datasets, KaaS adapts as conditions change. It also brings greater transparency, illustrating directly how contributors’ inputs are utilised. This level of attribution represents a step toward more ethical and accountable AI.


The Value of Threat Intelligence in Ensuring DORA Compliance

One of the biggest challenges for security teams today is securing visibility into third-party providers within their ecosystem due to their volume, diversity, and the constant monitoring required. Utilising a Threat Intelligence Platform (TIP) with advanced capabilities can enable a security team to address this gap by monitoring and triaging threats within third-party systems through automation. It can flag potential signs of compromise, vulnerabilities, and risky behaviour, enabling organisations to take pre-emptive action before risks escalate and impact their systems. ... A major aspect of DORA is implementing a robust risk management framework. However, to keep pace with global expansion and new threats and technologies, this framework must be responsive, flexible, and up-to-date. Sourcing, aggregating, and collating threat intelligence data to facilitate this is a time-exhaustive task, and unfeasible for many resource-stretched and siloed security teams. ... From tabletop scenarios to full-scale simulations, these exercises evaluate how well systems, processes, and people can withstand and respond to real-world cyber threats. With an advanced TIP, security teams can leverage customisable workflows to recreate specific operational stress scenarios. These scenarios can be further enhanced by feeding real-world data on attacker behaviours, tactics, and trends, ensuring that simulations reflect actual threats rather than outdated risks.
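The automated triage the excerpt describes — aggregating third-party threat indicators and flagging vendors before risk escalates — can be sketched as a simple scoring loop. The severity weights and threshold below are assumptions for illustration, not taken from DORA or any TIP product:

```python
# Illustrative sketch of automated third-party threat triage.

SEVERITY_WEIGHT = {"low": 1, "medium": 3, "high": 5}

def triage(indicators, escalation_threshold=5):
    """Aggregate per-vendor risk from threat-intel indicators and flag
    vendors whose cumulative score crosses the escalation threshold."""
    scores = {}
    for ind in indicators:
        scores[ind["vendor"]] = scores.get(ind["vendor"], 0) + SEVERITY_WEIGHT[ind["severity"]]
    return {v: s for v, s in scores.items() if s >= escalation_threshold}

feed = [
    {"vendor": "acme-saas", "severity": "high"},
    {"vendor": "acme-saas", "severity": "medium"},
    {"vendor": "widget-co", "severity": "low"},
]
flagged = triage(feed)   # acme-saas scores 8 and is flagged; widget-co is not
```

A real TIP would feed this from live intelligence streams and trigger downstream workflows (ticketing, access review) for each flagged vendor.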


Why your security team feels stuck

The problem starts with complexity. Security stacks have grown dense, and tools like EDR, SIEM, SOAR, CASB, and DSPM don’t always integrate well. Analysts often need to jump between multiple dashboards just to confirm whether an alert matters. Tuning systems properly takes time and resources, which many teams don’t have. So alerts pile up, and analysts waste energy chasing ghosts. Then there’s process friction. In many organizations, security actions, especially the ones that affect production systems, require multiple levels of approval. On paper, that’s to reduce risk. But these delays can mean missing the window to contain an incident. When attackers move in minutes, security teams shouldn’t be stuck waiting for a sign-off. ... “Security culture is having a bit of a renaissance. Each member of the security team may be in a different place as we undertake this transformation, which can cause internal friction. In the past, security was often tasked with setting and enforcing rules in order to secure the perimeter and ensure folks weren’t doing risky things on their machines. While that’s still part of the job, security and privacy teams today also need to support business growth while protecting customer data and company assets. If business growth is the top priority, then security professionals need new tools and processes to secure those assets.”


Your data privacy is slipping away. Here's why, and what you can do about it

In 2024, the Identity Theft Resource Center reported that companies sent out 1.3 billion notifications to the victims of data breaches. That's more than triple the notices sent out the year before. It's clear that despite growing efforts, personal data breaches are not only continuing, but accelerating. What can you do about this situation? Many people think of the cybersecurity issue as a technical problem. They're right: Technical controls are an important part of protecting personal information, but they are not enough. ... Even the best technology falls short when people make mistakes. Human error played a role in 68% of 2024 data breaches, according to a Verizon report. Organizations can mitigate this risk through employee training, data minimization—meaning collecting only the information necessary for a task, then deleting it when it's no longer needed—and strict access controls. Policies, audits and incident response plans can help organizations prepare for a possible data breach so they can stem the damage, see who is responsible and learn from the experience. It's also important to guard against insider threats and physical intrusion using physical safeguards such as locking down server rooms. ... Despite years of discussion, the U.S. still has no comprehensive federal privacy law. Several proposals have been introduced in Congress, but none have made it across the finish line. 
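Data minimization, as defined in the excerpt, has two halves: collect only the fields a task needs, and delete records once the retention period ends. A minimal sketch, with illustrative field names and a 90-day retention assumption:

```python
# Sketch of data minimization: keep only required fields at collection time,
# and purge records past a retention window.

from datetime import datetime, timedelta

REQUIRED_FIELDS = {"order_id", "item", "shipping_zip"}

def minimize(record):
    """Collect only the information necessary for the task."""
    return {k: v for k, v in record.items() if k in REQUIRED_FIELDS}

def purge_expired(records, retention_days=90, now=None):
    """Delete data when it's no longer needed."""
    now = now or datetime.utcnow()
    cutoff = now - timedelta(days=retention_days)
    return [r for r in records if r["created"] >= cutoff]

raw = {"order_id": 7, "item": "book", "shipping_zip": "02139",
       "ssn": "000-00-0000", "dob": "1990-01-01"}
stored = minimize(raw)   # SSN and date of birth never enter the system
```

The point of minimizing at ingestion is that a field never collected cannot appear in a breach notification later.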


How To Build Smarter Factories With Edge Computing

According to edge computing experts, these are essentially rugged versions of computers, of any size, purpose-built for their harsh environments. Forget standard form factors; industrial edge devices come in varied configurations specific to the application. This means a device shaped to fit precisely where it’s needed, whether tucked inside a machine or mounted on a factory wall. ... What makes these tough machines intelligent? It’s the software revolution happening on factory floors right now. Historically, industrial computing relied on software specially built to run on bare metal; custom code directly installed on specific machines. While this approach offered reliability and consistent, deterministic performance, it came with significant limitations: slow development cycles, difficult updates and vendor lock-in. ... Communication between smart devices presents unique challenges in industrial environments. Traditional networking approaches often fall short when dealing with thousands of sensors, robots and automated systems. Standard Wi-Fi faces significant constraints in factories where heavy machinery creates electromagnetic interference, and critical operations can’t tolerate wireless dropouts.
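Because critical operations can't tolerate wireless dropouts, a common edge pattern is to buffer readings locally and flush them when the link recovers. The transport below is a stand-in callable; a real deployment might use MQTT or OPC UA:

```python
# Sketch of dropout-tolerant publishing at the edge: readings are buffered
# locally when the link is down, then flushed in order once it recovers.

from collections import deque

class EdgePublisher:
    def __init__(self, transport, buffer_size=1000):
        self.transport = transport               # callable: send(reading) -> bool
        self.buffer = deque(maxlen=buffer_size)  # oldest readings drop if full

    def publish(self, reading):
        self.buffer.append(reading)
        while self.buffer:
            if not self.transport(self.buffer[0]):
                return False                     # link down: keep readings buffered
            self.buffer.popleft()
        return True

# Simulated flaky link for demonstration
sent = []
link_up = [False]
def flaky_send(reading):
    if link_up[0]:
        sent.append(reading)
        return True
    return False

pub = EdgePublisher(flaky_send)
pub.publish({"temp_c": 71.5})   # dropout: reading is buffered, not lost
link_up[0] = True
pub.publish({"temp_c": 72.0})   # link restored: both readings flushed in order
```

The bounded buffer is the key design choice: on a device with limited storage, dropping the oldest readings is usually preferable to crashing during a long outage.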


Fighting in a cloudy arena

“There are a few primary problems. Number one is that the hyperscalers leverage free credits to get digital startups to build their entire stack on their cloud services,” Cochrane says, adding that as the startups grow, the technical requirements from hyperscalers leave them tied to that provider. “The second thing is also in the relationship they have with enterprises. They say, ‘Hey, we project you will have a $250 million cloud bill, we are going to give you a discount.’ Then, because the enterprise has a contractual vehicle, there’s a mad rush to use as much of the hyperscalers’ compute as possible because you either lose it or use it. “At the end of the day, it’s like the roach motel. You can check in, but you can’t check out,” he sums up. ... "We are exploring our options to continue to fight against Microsoft’s anti-competitive licensing in order to promote choice, innovation, and the growth of the digital economy in Europe." Mark Boost, CEO of UK cloud company Civo, said: ”However they position it, we cannot shy away from what this deal appears to be: a global powerful company paying for the silence of a trade body, and avoiding having to make fundamental changes to their software licensing practices on a global basis.” In the months that followed this decision, things got interesting.


How passkeys work: The complete guide to your inevitable passwordless future

Passkeys are often described as a passwordless technology. In order for passwords to work as a part of the authentication process, the website, app, or other service -- collectively referred to as the "relying party" -- must keep a record of that password in its end-user identity management system. This way, when you submit your password at login time, the relying party can check to see if the password you provided matches the one it has on record for you. The process is the same, whether or not the password on record is encrypted. In other words, with passwords, before you can establish a login, you must first share your secret with the relying party. From that point forward, every time you log in, you must send your secret to the relying party again. In the world of cybersecurity, passwords are considered shared secrets, and no matter who you share your secret with, shared secrets are considered risky. ... Many of the largest and most damaging data breaches in history might not have happened had a malicious actor not discovered a shared password. In contrast, passkeys also involve a secret, but that secret is never shared with a relying party. Passkeys are a form of Zero Knowledge Authentication (ZKA). The relying party has zero knowledge of your secret, and in order to sign in to a relying party, all you have to do is prove to the relying party that you have the secret in your possession.
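The "prove you have the secret without sharing it" flow boils down to a challenge-response over a keypair: the relying party stores only the public key, and the device signs a fresh challenge with the private key that never leaves it. The sketch below uses textbook RSA with deliberately tiny numbers purely to make the mechanics visible; real passkeys use WebAuthn with ECDSA/EdDSA and much larger keys:

```python
# Toy passkey flow: relying party verifies possession of a private key
# using only the public key. NOT real cryptography -- toy RSA parameters
# (p=61, q=53) chosen so the arithmetic is visible.

import secrets

N, E = 3233, 17      # public key, stored by the relying party
D = 2753             # private exponent: stays on the device, never shared

def device_sign(challenge):
    """The authenticator proves possession of the private key."""
    return pow(challenge, D, N)

def relying_party_verify(challenge, signature):
    """The relying party checks the proof with only the public key --
    it has zero knowledge of the secret itself."""
    return pow(signature, E, N) == challenge

challenge = secrets.randbelow(N - 2) + 2   # fresh random challenge per login
assert relying_party_verify(challenge, device_sign(challenge))
```

Because the challenge is random each time, a captured response is useless for replay, and a breach of the relying party's database leaks only public keys.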


Crafting a compelling and realistic product roadmap

The most challenging aspect of roadmap creation is often prioritization. Given finite resources, not everything can be built at once. Effective prioritization requires a clear framework. Common methods include scoring features based on business value versus effort, using frameworks like RICE, or focusing on initiatives that directly address key strategic objectives. Be prepared to say “no” to good ideas that don’t align with current priorities. Transparency in this process is vital. Communicate why certain items are prioritized over others to stakeholders, fostering understanding and buy-in, even when their preferred feature isn’t immediately on the roadmap. ... A product roadmap is a living document, not a static contract. The B2B software landscape is constantly evolving, with new technologies emerging, customer needs shifting, and competitive pressures mounting. A realistic roadmap acknowledges this dynamism. While it provides a clear direction, it should also be adaptable. Plan for regular reviews and updates – quarterly or even monthly – to adjust based on new insights, validated learnings, and changes in the market or business environment. Embrace iterative development and be prepared to pivot or adjust priorities as new information comes to light. 
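The RICE framework mentioned above reduces to a single formula: score = (Reach × Impact × Confidence) / Effort. A minimal sketch with made-up feature numbers:

```python
# RICE prioritization: (Reach * Impact * Confidence) / Effort.
# Feature names and estimates below are illustrative.

def rice_score(reach, impact, confidence, effort):
    """Reach: users affected per quarter; Impact: 0.25-3 scale;
    Confidence: 0-1; Effort: person-months. Higher score = higher priority."""
    return (reach * impact * confidence) / effort

features = {
    "sso_integration": rice_score(reach=800, impact=2, confidence=0.8, effort=4),
    "dark_mode":       rice_score(reach=2000, impact=0.5, confidence=0.9, effort=1),
    "bulk_export":     rice_score(reach=300, impact=1, confidence=0.5, effort=2),
}
ranked = sorted(features, key=features.get, reverse=True)
```

Publishing the inputs alongside the ranking is what makes the "why was my feature deprioritized?" conversation transparent rather than political.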


Are software professionals ready for the AI tsunami?

Modern AI assistants can translate plain-English prompts into runnable project skeletons or even multi-file apps aligned with existing style guides (e.g., Replit). This capability accelerates experimentation and learning, especially when teams are exploring unfamiliar technology stacks. A notable example is MagicSchool.com, a real-world educational platform created using AI-assisted coding workflows, showcasing how AI can powerfully convert conceptual prompts into usable products. These tools enable rapid MVP development that can be tested directly with customers. Once validated, the MVP can then be scaled into a full-fledged product. Rapid code generation can lead to fragile or opaque implementations if teams skip proper reviews, testing, and documentation. Without guardrails, it risks technical debt and poor maintainability. To stay reliable, agile teams must pair AI-generated code with sprint reviews, CI pipelines, automated testing, and strategies to handle evolving features and business needs. Recognising the importance of this shift, tech giants like Amazon (CodeWhisperer) and Google (AlphaCode) are making significant investments in AI development tools, signaling just how central this approach is becoming to the future of software engineering.

Daily Tech Digest - February 06, 2025


Quote for the day:

"Success is liking yourself, liking what you do, and liking how you do it." -- Maya Angelou


Here’s How Standardization Can Fix the Identity Security Problem

Fragmentation in identity security doesn’t only waste resources, it also leaves businesses exposed to threat actors, leading to potential reputational and financial damage if systems are compromised. Misconfigurations often arise when teams are pressured to deliver quickly without adequate frameworks. Fragmentation also forces teams to juggle mismatched tools, creating gaps in oversight. These gaps become weak points for attackers, leading to cascading failures. ... Standardization transforms the complexity of identity management into a straightforward, structured process. Instead of piecing together bespoke solutions, leveraging established frameworks can deliver robust, scalable and future-proof security. ... Developers often need to weigh short-term challenges against long-term gains. Adopting standardized identity frameworks is one decision where the long-term benefits are clear. Increased efficiency, security and scalability contribute to a more sustainable development process. Standardization equips us with ready-to-use solutions for essential features, freeing us to focus on innovation. It also enables applications to meet compliance requirements without added strain on teams. By investing in frameworks like IPSIE, we can future-proof our systems while reducing the burden on individual developers.


How Data Contracts Support Collaboration between Data Teams

Data contracts are to data what APIs are to software systems, Christ said. They are an interface specification between a data provider and their data consumers. Data contracts specify the provided data model with the syntax, format, and semantics, but also contain data quality guarantees, service-level objectives, and terms and conditions for using the data, Christ mentioned. They also define the owner of the provided data product who is responsible if there are any questions or issues, he added. Data mesh is an important driver for data contracts, as data mesh introduces distributed ownership of data products, Christ said. Before that, we usually had just one central team that was responsible for all data and BI activities, with no need to specify interfaces with other teams. ... Data providers benefit by gaining visibility into which consumers are accessing their data. Permissions can be automated accordingly, and when changes need to be implemented in a data product, a new version of the data contract can be introduced and communicated with the consumers, Christ said. With data contracts, we have very high-quality metadata, Christ said. This metadata can be further leveraged to optimize governance processes or build an enterprise data marketplace, enabling better discoverability, transparency, and automated access management across the organization to make data available for more teams.
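The elements Christ lists — data model with semantics, quality guarantees, SLOs, terms, and an owner — can be captured in one structured document. The schema below is illustrative, not a specific data-contract standard:

```python
# Sketch of a data contract capturing model, quality, SLOs, terms, and owner.
# Field names and values are illustrative.

orders_contract = {
    "id": "orders-v2",
    "owner": "team-checkout",            # responsible for questions and issues
    "model": {
        "order_id":   {"type": "string", "required": True},
        "amount_eur": {"type": "decimal", "semantics": "gross amount incl. VAT"},
    },
    "quality": {"order_id": "unique, never null"},
    "slo": {"freshness_minutes": 60, "availability": "99.9%"},
    "terms": "internal analytics use only; no re-sharing outside the org",
}

def validate(record, contract):
    """Consumers can check incoming records against the contract's model."""
    for field, spec in contract["model"].items():
        if spec.get("required") and field not in record:
            return False
    return True

ok = validate({"order_id": "A-1", "amount_eur": 12.5}, orders_contract)
```

Because the contract is machine-readable, the same document can drive schema checks in CI, automated access grants, and marketplace listings.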


How Agentic AI will be Weaponized for Social Engineering Attacks

To combat advanced social engineering attacks, consider building or acquiring an AI agent that can assess changes to the attack surface, detect irregular activities indicating malicious actions, analyze global feeds to detect threats early, monitor deviations in user behavior to spot insider threats, and prioritize patching based on vulnerability trends. ... Security awareness training is a non-negotiable component to bolstering human defenses. Organizations must go beyond traditional security training and leverage tools that can do things like assign engaging content to users based on risk scores and failure rates, dynamically generate quizzes and social engineering scenarios based on the latest threats, trigger bite-sized refreshers, etc. ... Human intuition and vigilance are critical in combating social engineering threats. Organizations must double down on fostering a culture of cybersecurity, educating employees on the risks of social engineering and the impact on the organization, training to identify and report such threats, and empowering them with tools that can improve security behavior. Gartner predicts that by 2028, a third of our interactions with AI will shift from simply typing commands to fully engaging with autonomous agents that can act on their own goals and intentions. Obviously, cybercriminals won’t be far behind in exploiting these advancements for their misdeeds.

As businesses expand their cloud services and integrate AI, IoT, and other digital tools, the attack surface grows exponentially. Cybercriminals are exploiting this vast surface with increasingly sophisticated tactics, including AI-driven attacks that can bypass traditional security measures. Lack of visibility across multicloud environments: Many businesses rely on a combination of private, public, and hybrid cloud solutions, which can create visibility gaps. Security teams struggle to manage and monitor resources across various platforms, making it difficult to detect vulnerabilities or respond to threats in real time. Misconfigurations and human error: Cloud misconfigurations remain one of the leading causes of data breaches. ... Ongoing risk assessments are essential for identifying vulnerabilities and understanding the potential attack vectors in cloud environments. Regular penetration testing can help organisations identify and patch security gaps proactively. These assessments, combined with continuous monitoring, ensure the security posture evolves alongside emerging threats. Centralised threat detection and response: Implementing a centralised security platform that aggregates data from multiple cloud environments can streamline threat detection and response. By correlating network events with cloud activities, security teams can gain deeper insights into potential risks and reduce the mean time to resolution (MTTR) for incidents.
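The centralized correlation described here — joining network events with cloud activity across platforms — can be sketched as grouping events by a shared entity within a time window. Field names, the choice of source IP as the join key, and the window are all illustrative assumptions:

```python
# Minimal sketch of cross-cloud event correlation: flag entities (here,
# source IPs) active on more than one platform within a time window.

from collections import defaultdict

def correlate(events, window_minutes=30):
    """events: dicts with 'platform', 'src_ip', 'ts' (minutes since start).
    Returns IPs seen on multiple platforms inside the window."""
    by_ip = defaultdict(list)
    for e in events:
        by_ip[e["src_ip"]].append(e)
    flagged = []
    for ip, evs in by_ip.items():
        evs.sort(key=lambda e: e["ts"])
        platforms = {e["platform"] for e in evs}
        if len(platforms) > 1 and evs[-1]["ts"] - evs[0]["ts"] <= window_minutes:
            flagged.append(ip)
    return flagged

events = [
    {"platform": "aws",   "src_ip": "203.0.113.9",  "ts": 0},
    {"platform": "azure", "src_ip": "203.0.113.9",  "ts": 12},
    {"platform": "aws",   "src_ip": "198.51.100.4", "ts": 5},
]
suspects = correlate(events)
```

A production platform would correlate on richer identifiers (identities, sessions, workloads) and feed the output straight into the response workflow, which is where the MTTR reduction comes from.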


Is 2025 the year of quantum computing?

As quantum computing research gradually inches toward real-world usability, you might wonder where we’ll see the impacts of this technology, both short- and long-term. One of the most immediately important areas is cryptography. Since a quantum computer can take on many states simultaneously, something like factoring large numbers can proceed in parallel, relying on the superposition of particle states to explore many possible outcomes at once. There is also a tantalizing potential for cross-over between machine learning and quantum computing. Here, the probabilistic nature of neural networks and AI in general seems to lend itself to being modeled at a more fundamental level, and with far greater efficiency, using the hardware capabilities of quantum computers. How powerful would an AI system be if it rested on quantum hardware? Another area is the development of alternative energy sources, including fusion. Using matter itself to model reality opens up possibilities we can’t yet fully predict. Drug discovery and material design are also areas of interest for quantum calculations. At the hardware level, quantum systems allow us to use matter itself to model the complexity of designing useful matter. These and other exciting developments, especially in error correction, seem to indicate quantum computing’s time is finally coming. 


The overlooked risks of poor data hygiene in AI-driven organizations

A significant risk posed by AI-enabled apps is called ‘AI oversharing,’ where enterprise applications expose sensitive information through poorly defined access controls. This is especially prevalent in retrieval-augmented generation (RAG) applications when original source permissions aren’t honoured throughout the system. Imagine for a minute if you were an enterprise with millions of documents that contain decades of enterprise knowledge and you wanted to leverage AI through a RAG-based architecture. A typical approach is to load all of those documents into a vector database. If you exposed that data through an AI chatbot without honouring the original permissions on those documents, then anyone issuing a prompt could access any of that data. ... Organizations need to implement a methodical process for assessing and preparing data for AI applications, as sophisticated attacks like prompt injection and unauthorized data access become more prevalent. Begin with a thorough inventory of your data stores, including file and document stores, support and ticketing systems, and any other data sources that you’ll source your enterprise data from. Then work to understand its potential use in AI applications and identify critical gaps or inconsistencies.
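Honouring original permissions in a RAG pipeline means filtering vector-store hits against the source document ACLs before any text reaches the model. A minimal sketch, with illustrative store layout and ACL fields:

```python
# Permission-aware RAG retrieval: only documents the user could already
# read in the source system are eligible for the prompt context.

def retrieve_for_user(query_hits, user, user_groups):
    """query_hits: documents scored by the vector search, each carrying
    the ACL copied from the source system at ingestion time."""
    allowed = []
    for doc in query_hits:
        acl = doc["acl"]
        if user in acl.get("users", []) or user_groups & set(acl.get("groups", [])):
            allowed.append(doc)
    return allowed

hits = [
    {"text": "Q3 board deck", "acl": {"groups": ["executives"]}},
    {"text": "Public FAQ",    "acl": {"groups": ["all-employees"]}},
]
context = retrieve_for_user(hits, user="alice", user_groups={"all-employees"})
# Only "Public FAQ" is passed to the LLM for alice
```

The crucial design point is filtering at retrieval time with the querying user's identity; loading everything into one index and relying on the chatbot to self-censor is exactly the oversharing failure the excerpt warns about.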


Who Is Attacking Smart Factories? Understanding the Evolving Threat Landscape

Cybercriminals no longer rely on broad, generalized attacks but have begun to tailor their malware specifically for OT systems. For example, they know which files on engineering workstations or MES systems are most important for production and will specifically target them for encryption. This shift has also seen an increase in multi-vector attacks. Attackers might gain initial access through phishing emails but, once inside, use tools that enable them to move seamlessly between IT and OT networks. The goal is no longer just to hold data hostage but to encrypt or destroy files that are crucial to the manufacturing process. With this targeted approach, attackers increase the likelihood that companies will pay the ransom, especially when systems critical to production are held hostage. ... The increasing sophistication of these attacks highlights the need for manufacturers to adopt a holistic approach to cybersecurity. While technical countermeasures like firewalls, endpoint security, and intrusion detection systems are important, they are not enough on their own. A comprehensive security strategy must address both IT and OT environments and recognize the interdependence between these systems. Manufacturers should focus on risk assessment across their entire value chain, from the factory floor to the supply chain and customer-facing systems. 


Legislators demand truth about OPM email server

Erik Avakian, security counselor at Info-Tech Research Group said the “recent development regarding OPM and the alleged issues regarding an email server being deployed on the agency network and emails being distributed by the agency to federal employees raise potential security and privacy concerns that, if substantiated, could be out of sync with well-defined cybersecurity best practices and privacy regulations.” Most important, he said, would be the way in which the system had been deployed onto the federal network, “particularly in light of the many existing US federal government-required processes, procedures, and checks a system would need to undergo before receiving green light approval for such a fast-tracked deployment. There could be fast-track processes in place for such instances.” However, even in such cases, said Avakian, “any deployment of systems or tools would certainly, as best practice, need to be reviewed for security vulnerabilities, and its architecture checked and hardened, at a minimum, to be aligned with the federal security requirements for systems deployed on the network prior to going live.” The question would be whether the processes were followed, he said. “In any case, there could be quite a checklist of issues regarding Compliance with Cybersecurity Frameworks, Best Practices, and the Federal Government’s Memo regarding the Implementation of Zero Trust, to name a few, as well as numerous privacy laws.”


Open-Source AI: Power Shift or Pandora's Box?

"This is no longer just a technological race, it’s a geopolitical one. While open-source models offer accessibility, their full training pipeline and datasets often remain undisclosed. Nations are using AI to influence global markets, trade policies and digital sovereignty," said Amitkumar Shrivastava, global distinguished engineer and head of AI at Fujitsu Consulting India. "The real winners will be those who balance innovation with regulatory foresight and ethical AI practices." While open-source AI fosters innovation, it also raises concerns about security, compliance and ethical risks. Increased accessibility introduces challenges such as misinformation, deepfake generation and unauthorized automation. "DeepSeek is open-source, which is very important, as it allows users to download the models and run them on their own hardware if they have the capacity. We are already seeing others create local installations of DeepSeek models even without GPUs," Professor Balaraman Ravindran, IIT Madras, wrote in his blog. "Assuming that DeepSeek's claims on infrastructure reductions are true, some researchers are still not fully convinced and are in the process of verifying the claims. There will be an immediate breakdown of the monopolistic hold of a few technology giants with deep pockets to control the AI market - much like India developing cheap Corona vaccine," said Dr. Sanjeev Kumar.


The Cost of AI Security

The cost of AI and its security needs is going to be an ongoing conversation for enterprise leaders. “It’s still so early in the cycle that most security organizations are trying to get their arms around what they need to protect, what’s actually different. What do [they] already have in place that can be leveraged?” says Saeedi. Who is a part of these evolving conversations? CISOs, naturally, have a leading role in defining the security controls applied to an enterprise’s AI tools, but given the growing ubiquity of AI, a multistakeholder approach is necessary. Other C-suite leaders, the legal team, and the compliance team often have a voice. Saeedi is seeing cross-functional committees forming to assess AI risks, implementation, governance, and budgeting. As these teams within enterprises begin to wrap their heads around various AI security costs, the conversation needs to include AI vendors. “The really key part for any security or IT organization, when [we’re] talking with the vendor is to understand, ‘We’re going to use your AI platform but what are you going to do with our data?’” Is that vendor going to use an enterprise’s data for model training? How is that enterprise’s data secured? How does an AI vendor address the potential security risks associated with the implementation of its tool?

Daily Tech Digest - July 06, 2022

10 Things You Are Not Told About Data Science

Many data scientists become disillusioned when they are hired for statistics and machine learning, but instead find themselves serving as the resident “IT expert.” This phenomenon is not new and actually predates data science. Shadow information technology (shadow IT) describes office workers who create systems outside their IT department. This includes databases, dashboards, scripts, and code. This used to be frowned on in organizations, as it is unregulated and operating outside the IT department’s scope of control. However, one benefit of the data science movement is it has made shadow IT more accepted as a necessity for innovation. Rather than be disillusioned, a data scientist can gain proficiency in SQL, programming, cloud platforms, web development, and other useful technologies. After all, a data scientist works with data, and that implicitly can lead to IT work. It can also make their work streamlined and more accessible to others, and open up possibilities for statistical and machine learning models.


The connected nature of smart factories is exponentially increasing the risk of cyber attacks

The research found that, for many organizations, cybersecurity is not a major design factor; only 51% build cybersecurity practices in their smart factories by default. Unlike IT platforms, all organizations may not be able to scan machines at a smart factory during operational uptime. System-level visibility of IIoT and OT devices is essential to detect when they have been compromised; 77% are concerned about the regular use of non-standard smart factory processes to repair or update OT/IIoT systems. This challenge partly originates from the low availability of the correct tools and processes; however, 51% of organizations said that smart factory cyberthreats primarily originate from their partner and vendor networks. Since 2019, 28% noted a 20% increase in employees or vendors bringing in infected devices, such as laptops and handheld devices, to install/patch smart-factory machinery. ... When it comes to incidents, only a few of the organizations surveyed claimed that their cybersecurity teams have the required knowledge and skills to carry out urgent security patching without external support.


Google’s Powerful Artificial Intelligence Spotlights a Human Cognitive Glitch

The human brain is hardwired to infer intentions behind words. Every time you engage in conversation, your mind automatically constructs a mental model of your conversation partner. You then use the words they say to fill in the model with that person’s goals, feelings and beliefs. The process of jumping from words to the mental model is seamless, getting triggered every time you receive a fully fledged sentence. This cognitive process saves you a lot of time and effort in everyday life, greatly facilitating your social interactions. However, in the case of AI systems, it misfires – building a mental model out of thin air. A little more probing can reveal the severity of this misfire. Consider the following prompt: “Peanut butter and feathers taste great together because___”. GPT-3 continued: “Peanut butter and feathers taste great together because they both have a nutty flavor. Peanut butter is also smooth and creamy, which helps to offset the feather’s texture.” The text in this case is as fluent as our example with pineapples, but this time the model is saying something decidedly less sensible. 


VMware report finds org modernization cannot succeed without observability

Enterprises have evolved their cloud strategies to multicloud environments and are adopting more containers, microservices and cloud-native technologies. This is creating increasingly distributed systems, making it harder to gain a comprehensive view into how they’re performing, Weiss said. As a result, legacy monitoring tools are obsolete for modern applications. “The reason for that is the change to cloud computing multi-services. Together with the amount of data that is being generated in these applications, you can’t cope with it anymore,” Weiss said. Monitoring merely collects data from the system and alerts admins to something being wrong. Observability goes beyond monitoring to interpret the data, providing answers on why something is wrong and how to fix it, allowing teams to pinpoint the root cause, minimize downtime and increase operational efficiency. “Previously, the solution was to put an agent on the server that can do everything, collect everything – but there is no place to put the agent anymore,” Weiss told VentureBeat. “Services are becoming very volatile. They’re disappearing. They’re here now, they’re not here tomorrow. I’m not even talking about serverless. So, that’s a change that is trending.”


A breakthrough algorithm developed in the US can predict crimes a week ahead

The concept might sound interesting, but the actual application was dodgy. As investigations later showed, almost half of the alleged perpetrators on the list had never been charged with illegal possession of arms, while others had not been charged with serious offenses before. A Technology Review report in 2019 detailed how risk assessment algorithms that determined whether an individual should be sent to jail were trained on historically biased data. So, when researchers at the University of Chicago, led by assistant professor Ishanu Chattopadhyay, tried to build their algorithm, they wanted to avoid past mistakes. The algorithm divides a city into tiles roughly 1,000 feet across and uses historical data on violent and property crimes to predict future events. The researchers told Bloomberg that their model differs from other such algorithmic predictions, which treat crime as emerging from hotspots and spreading to other areas. Such approaches, the researchers argue, miss the complex social environment of cities and are also biased by the surveillance the state uses for law enforcement.
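The tiling step can be sketched in a few lines of Python; the tile size handling, coordinate scheme, and function names here are illustrative assumptions, not the researchers' actual code:

```python
# Sketch: partition a city into fixed-size square tiles and bucket historical
# incidents by tile, the precursor to per-tile event prediction. Coordinates
# are assumed to be in feet from an arbitrary city origin.
from collections import defaultdict

TILE_SIZE_FT = 1000  # side length of each square tile, in feet

def tile_index(x_ft, y_ft, tile_size=TILE_SIZE_FT):
    """Map a point to its (col, row) tile."""
    return (int(x_ft // tile_size), int(y_ft // tile_size))

def bucket_incidents(incidents, tile_size=TILE_SIZE_FT):
    """Group (x, y, crime_type) records by tile to get per-tile counts."""
    counts = defaultdict(int)
    for x, y, _crime_type in incidents:
        counts[tile_index(x, y, tile_size)] += 1
    return dict(counts)

incidents = [(120, 950, "theft"), (180, 990, "assault"), (2500, 3100, "burglary")]
print(bucket_incidents(incidents))  # {(0, 0): 2, (2, 3): 1}
```

The first two incidents fall in the same tile; a time series of such per-tile counts is the kind of input a predictive model would consume.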


7 key new features in SingleStoreDB

SingleStore has also enhanced SingleStoreDB with the addition of Code Engine with Wasm. Now users can bring external data and compute algorithms to power new real-time use cases within the database engine, drawing on WebAssembly. With Code Engine with Wasm, developers can securely, natively, and efficiently execute rich computation in the database using their programming language of choice. For computations and algorithms that are not easily expressed in SQL, Wasm support in SingleStoreDB brings algorithms to the data without having to move that data outside of the database. With SingleStoreDB Universal Language support, enterprises can now quickly integrate machine learning into real-time applications and dashboards.  ... The latest release of SingleStoreDB also includes Data API, enabling seamless integrations with applications. Developers can use Data API to build serverless applications including web and mobile apps. Data API uses HTTP to run SQL operations against the database rather than maintaining a persistent TCP connection. The connection is dynamically reconfigured, and each request-response is its own connection.
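The request-per-connection model of the Data API can be illustrated with a hedged Python sketch; the endpoint path (`/api/v2/query/rows`) and payload fields are assumptions modeled on SingleStore's published HTTP Data API and should be checked against the current documentation:

```python
# Sketch of calling a SingleStoreDB-style HTTP Data API instead of holding a
# persistent TCP connection: each SQL statement is one request-response pair.
# Endpoint path and payload shape are assumptions; auth headers are omitted.
import json
import urllib.request

def build_query_request(host, database, sql, args=()):
    """Build an HTTP request that runs one SQL statement per request-response."""
    payload = {"sql": sql, "args": list(args), "database": database}
    return urllib.request.Request(
        url=f"https://{host}/api/v2/query/rows",
        data=json.dumps(payload).encode("utf-8"),
        headers={"Content-Type": "application/json"},
        method="POST",
    )

req = build_query_request("svc-demo.example.com", "app",
                          "SELECT id, total FROM orders WHERE id = ?", [42])
print(req.full_url)          # each call is its own connection; no pool to manage
print(json.loads(req.data))
```

Because the connection is dynamically reconfigured per request, this style suits serverless functions that cannot keep long-lived database connections open.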


Researchers Infuse ‘Human Guesses’ In Robots To Navigate Blind Spots

A novel methodology developed by MIT and Microsoft researchers identifies instances in which autonomous systems have “learned” from training samples that don’t reflect what happens in the real world. Engineers may employ this idea to improve the safety of robots and autonomous vehicles that use artificial intelligence. For instance, to prepare them for nearly every eventuality on the road, the artificial intelligence (AI) systems that drive autonomous cars go through extensive training in virtual simulations. But occasionally the car makes an unforeseen error as a result of a situation that ought to alter the way it acts but doesn’t. Consider an autonomous car without the sensors needed to distinguish between drastically different conditions, such as large, white cars and ambulances with red, flashing lights on the road. The car may not know to slow down and pull over when an ambulance sounds its sirens as it travels down the highway, because it cannot tell the ambulance apart from a big white sedan. As with conventional methods, the researchers trained an AI system using simulations.


Integrating blockchain-based digital IDs into daily life

While blockchain’s elevator pitch leans heavily on immutability, the technology boasts multiple advantages over traditional software and paper-based systems. Expert opinions on the benefits of blockchain boil down to control over personal information. Self-sovereignty stands as one of the biggest benefits of blockchain-based digital IDs, according to Martis. This means that blockchain empowers users to share partial or selective information with their service providers instead of handing over their complete identity. With blockchain-based IDs eradicating the misuse of information, experts envision the birth of a truly trustless system without the involvement of third parties. Gentry, too, reiterated verifiability, traceability and uniqueness as some of the major benefits brought about by blockchain, highlighting that blockchain IDs cannot be duplicated because they live on a distributed ledger. “All the Digital ID can be verified on the blockchain and can be traced back to the owners' account which can also be used for Know Your Customer,” she added.


Neurodiversity in Cybersecurity: Broadening Perspectives, Offering Inclusivity

“There are not enough skilled people in this field, but neurodivergent individuals bring an essential skillset to cybersecurity -- hyper focus on analyzing data and identifying trends,” explains Rex Johnson, executive director of cybersecurity at CAI. “Not everyone has this ability, or at least do it well, except for neurodiverse talent.” To reach out to neurodiverse professionals, Johnson says organizations must look beyond traditional recruiting methods. “Depending on the need, consider a team of neurodivergent individuals who work under a supervisor who understands how to manage this dynamic and be the liaison to other management teams,” he advises. They can also look for organizations that implement an end-to-end neurodiversity employment program that not only brings the right neurodivergent teammate in the door, but also works with the employer to create workplace accommodations that increase retention, morale, and productivity. “Not everyone is the same. People are inspired and motivated by many different visions and missions,” Johnson adds.


Staying protected amidst the cyber weapons arms race

Most would not like to admit it, but vulnerabilities are inevitable. Although a ransomware event is likely to affect an organisation at some point, ransomware itself is not completely out of the control of a business. Vendors have an ethical imperative to be transparent with the customer community when they become aware of a vulnerability in their product, providing clear assessment of impact and steps to remediate. As soon as any vulnerability in its software is known, speed and effectiveness in sharing relevant information and patches with customers and stakeholders are crucial. Once alerted, the impacted customer community then has a shared responsibility to action this information, in the context of the impact on their business and what that means for their resilience and continuity of operations. Here the vendor’s responsibility clearly becomes double-edged. Vendors must be transparent so their customers can apply the fix, yet this sets off a ticking time bomb as threat actors continuously scour the internet for this type of information, hoping to exploit the vulnerability before organisations have had time to apply the patch. 



Quote for the day:

"People seldom improve when they have no other model but themselves." -- Oliver Goldsmith

Daily Tech Digest - February 15, 2020

How Can Companies Minimize Risk Against Emerging Threats?

It's estimated that there is a ransomware attack every 14 seconds somewhere in the world. By far, the single greatest vulnerability that companies continue to face is the infiltration of malware from phishing campaigns. Other vulnerabilities stem from the proliferation of IoT components, cloud storage and computing, and new data and financial apps that external vendors provide and install on the organization's system. To battle the threat, I believe a dedicated effort must go all the way up to the C-level to ensure that everyone is put to the task, because when an intrusion attempt succeeds, it's already too late. It can take hackers as little as 19 minutes to get into a system, and up to eight hours for many companies to respond because they are bound by internal processes. Many larger companies install a variety of specialized solutions to protect themselves in different areas, and it seems that endless products answer very specific threats. Too often, though, that buildup of solutions from a multitude of vendors exacerbates the risk that each patch is intended to guard against.



Emotion AI researchers say overblown claims give their work a bad name


Emotion recognition, also known as affective computing, is still a nascent technology. As AI researchers have tested the boundaries of what we can and can’t quantify about human behavior, the underlying science of emotions has continued to develop. There are still multiple theories, for example, about whether emotions can be distinguished discretely or fall on a continuum. Meanwhile, the same expressions can mean different things in different cultures. In July, a meta-study concluded that it isn’t possible to judge emotion by just looking at a person’s face. The study was widely covered, often with headlines suggesting that “emotion recognition can’t be trusted.” Emotion recognition researchers are already aware of this limitation. The ones we spoke to were careful about making claims of what their work can and cannot do. Many emphasized that emotion recognition cannot actually assess an individual’s internal emotions and experience. It can only estimate how that individual’s emotions might be perceived by others, or suggest broad, population-based trends.


AIoT – Convergence of Artificial Intelligence with the Internet of Things


Large volumes of confidential company information and user data are tempting targets for dark web hackers as well as government entities around the world. This high level of risk also brings new responsibilities that accompany the increased capability. Sensors are now applied to almost everything, which means infinitely more data can be collected from every transaction or process in real time. IoT devices are the front line of the data collection process in manufacturing environments and in customer service departments. Any device with a chipset can potentially be connected to a network and begin streaming data 24/7. Complex algorithms allow predictive analytics from all conceivable angles. Machine learning (ML), a subset of AI, continues to upgrade workflows and simplify problem-solving. Companies now capture all the meaningful data surrounding their processes and problems to develop specific solutions for real challenges within the organization, improving efficiency, reliability, and sustainability.


8 steps to being (almost) completely anonymous online

The universe believes in encryption, a wise man once opined, because it is astronomically easier to encrypt than it is to brute-force decrypt. The universe does not appear to believe in anonymity, however, as it requires significant work to remain anonymous. Privacy and anonymity are often used interchangeably, and this is incorrect. An encrypted message may protect your privacy — because (hopefully) no one else can read it besides you and your recipient — but encryption does not protect the metadata, and thus your anonymity. Who you're talking to, when, for how long, how many messages, size of attachments, type of communication (text message? email? voice call? voice memo? video call?), all this information is not encrypted and is easily discoverable by sophisticated hackers or anyone with a mass surveillance apparatus, which is most of them these days. A final thought before we dig into specific technical tools: "Online" is now a meaningless word. Meatspace and cyberspace have merged. We used to live in the "real world" and "go online."


MIT finds massive security flaws with blockchain voting app

MIT researchers released a lengthy paper on Thursday that said hackers could change votes through the app, which has already been used in Oregon, West Virginia, Washington and Utah since 2018. "Their security analysis of the application, called Voatz, pinpoints a number of weaknesses, including the opportunity for hackers to alter, stop, or expose how an individual user has voted," MIT said in a news release. "Additionally, the researchers found that Voatz's use of a third-party vendor for voter identification and verification poses potential privacy issues for users," the MIT press release said. In a blog post and call with reporters, Voatz defended its security practices and disputed the claims made by the MIT researchers. The company said the research paper was based on an "old version" of the app and that because of this, many of their claims were invalid. "Voatz has worked for nearly five years to develop a resilient ballot marking system, a system built to respond to unanticipated threats and to distribute updates worldwide with short notice."


The time is now: How to manufacture your smart factory with Industrial IoT


Although the value of digital innovation is apparent, widespread adoption has been slow. This is due to a myriad of challenges. For many organisations, the biggest challenge is available talent — they simply don’t have the internal expertise to plan and execute digital innovation initiatives. With continued strain on IT budgets, organisations struggle to both manage the priorities of today and invest in the talent needed to help them transform their business. A new report by PwC identified hiring more Internet of Things (IoT) engineers and data scientists – while training the wider workforce in digital skills – as a key change CEOs must implement if they want to maximise the benefits from digitisation of manufacturing. Legacy technology is another factor holding manufacturers back. The average factory today is 25 years old, according to McKinsey, with machinery that’s approaching nine years old. Before any plans of integrating the IoT can begin at these plants, they must first upgrade equipment to enable digital readiness. Driven by immediate goals of reducing costs and returns, some manufacturing companies have deferred technology investment.


Microsoft's Windows Terminal: This is the final preview of its new command-line tool

This update brings new command-line arguments, such as the 'wt' execution alias. Users can now launch Terminal with new tabs and split panes, which open with preferred profiles and directories. Terminal developers point out that the 'wt' design was "heavily inspired by that of the venerable and beloved GNU screen competitor" called tmux, a terminal multiplexer for Unix-like systems. "You can wt new-tab, wt split-pane, wt new-tab -p Debian ; split-pane -p PowerShell until your heart's content," says Dustin Howett, an engineering lead at Microsoft. ... This release also has some goodies for PowerShell Core fans, with Terminal now automatically finding PowerShells on a system. "The Windows Terminal will now detect any version of PowerShell and automatically create a profile for you," explains Kayla Cinnamon, Windows Terminal program manager. "The PowerShell version we think looks best (starting from highest version number, to the most GA version, to the best-packaged version) will be named as 'PowerShell' and will take the original PowerShell Core slot in the dropdown."


Machine learning could lead cybersecurity into uncharted territory


Security threats are evolving to include adversarial attacks against AI systems; more expensive ransomware targeting cities, hospitals, and public-facing institutions; misinformation and spear phishing attacks that can be spread by bots in social media; and deepfakes and synthetic media with the potential to become security vulnerabilities. In the cover story, European correspondent Chris O’Brien dove into how the spread of AI in security can lead to less human agency in the decision-making process, with malware evolving to adapt and adjust to security firms' defense tactics in real time. Should the costs and consequences of security vulnerabilities increase, ceding autonomy to intelligent machines could begin to seem like the only right choice. We also heard from security experts like McAfee CTO Steve Grobman, F-Secure’s Mikko Hypponen, and Malwarebytes Lab director Adam Kujawa, who talked about the difference between phishing and spear phishing, addressed an anticipated rise in personalized spear phishing attacks, and spoke generally to the fears — unfounded and not — around AI in cybersecurity.


Cloud Threat Report Shows Need for Consistent DevSecOps

Despite efforts to educate developers on the importance of security, Chiodi says most developers believe their top priority is getting new features and functionality out as quickly as possible. “Yes, they’re supposed to engineer-in security, but that doesn’t happen in many cases,” Chiodi says. “Many organizations have not yet embraced the concept of DevSecOps.” Unit 42’s research shows that forward-leaning organizations such as consumer companies want to operate at cloud scale, serving a multitude of users, while maintaining security. Chiodi cites Netflix as a company that does so because it fully integrated development, security, and operations. He suggests that security teams should also embrace infrastructure as code to automatically put written security policies into code. “That way when a developer creates a new cloud environment, if it has security standards coded right in, every time they create from that template it will be the same every time,” he says. Conversely, Chiodi says a template with vulnerabilities will repeat those vulnerabilities each time it is applied.
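The "security standards coded right in" idea can be sketched as a template that stamps a baseline into every environment created from it; the control names and policy below are illustrative, not tied to any real cloud provider or IaC tool:

```python
# Sketch: an infrastructure-as-code style template with security baked in.
# Every environment created from the template inherits the same controls,
# so a developer cannot accidentally (or deliberately) omit them.
SECURITY_BASELINE = {
    "encrypt_at_rest": True,
    "block_public_access": True,
    "enable_audit_logging": True,
}

def create_environment(name, overrides=None):
    """Create an environment config; the baseline wins over any override."""
    config = {"name": name}
    config.update(overrides or {})
    config.update(SECURITY_BASELINE)  # applied last: policy cannot be disabled
    return config

env = create_environment("dev-api", {"block_public_access": False})
print(env["block_public_access"])  # True - the insecure override was discarded
```

The flip side Chiodi notes also holds: a baseline with a weak control would be stamped into every environment just as reliably.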


Election hacking: is it the end of democracy as we know it?

According to David Emm, senior security researcher at Kaspersky Lab, “the term ‘hacking’ often gets used loosely to refer to different attempts to interfere in elections. These include using social media to try and shape opinions or stealing data held on compromised computers to try and shame political figures, as well as tampering directly with machines used to manage the voting process.” Mateo Meier, the founder and CEO of Artmotion, a cloud security company, agrees that “threat actors will use all available tools at their disposal to hack the outcome [of an election]. So it’s always likely to be a multi-pronged approach rather than a single data breach during election season.” In recent years, governments have made some serious accusations, and researchers have demonstrated how vulnerabilities in voting machines can be targeted. “Such vulnerabilities have also been seen in the real-world, with NSW election results being challenged over [the] iVote security flaw. Yet, it’s difficult to gauge the impact a successful real world attack would have.



Quote for the day:


"Leaders need to be optimists. Their vision is beyond the present." -- Rudy Giuliani


Daily Tech Digest - December 04, 2019

10 bad programming habits we secretly love

For the last decade or so, the functional paradigm has been ascending. The acolytes for building your program out of nested function calls love to cite studies showing how the code is safer and more bug-free than the older style of variables and loops, all strung together in whatever way makes the programmer happy. The devotees speak with the zeal of true believers, chastising non-functional approaches in code reviews and pull requests. They may even be right about the advantages. But sometimes you just need to get out a roll of duct tape. Wonderfully engineered and gracefully planned code takes time, not just to imagine but also to construct and later to navigate. All of those layers add complexity, and complexity is expensive. Developers of beautiful functional code need to plan ahead and ensure that all data is passed along proper pathways. Sometimes it’s just easier to reach out and change a variable. Maybe put in a comment to explain it. Even adding a long, groveling apology to future generations in the comment is faster than re-architecting the entire system to do it the right way.
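The trade-off the author describes, a carefully plumbed functional pipeline versus simply mutating a variable, looks roughly like this (illustrative Python, not from the article):

```python
# Two ways to apply a one-off promo discount and tax to an order total.

# Functional style: thread the data through small pure functions.
def apply_discount(total, rate):
    return total * (1 - rate)

def add_tax(total, tax_rate):
    return total * (1 + tax_rate)

functional_total = add_tax(apply_discount(100.0, 0.10), 0.08)

# "Duct tape" style: reach out and mutate the variable in place.
total = 100.0
total *= 0.90   # one-off promo; apologies to future maintainers
total *= 1.08   # sales tax

print(round(functional_total, 2), round(total, 2))  # both 97.2
```

Both produce the same number; the argument in the article is about which one is cheaper to write now versus cheaper to maintain later.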



Advancements in explainable AI will continue in 2020 and beyond as new standards are developed around the technical definition of explainability, slowly followed by new technologies that address the explainability problem for business leaders and other non-technical audiences. In real estate, for example, offering a compelling explanation for why a mortgage application was rejected by an AI-driven platform will eventually be a necessity as AI adoption continues. Although we’ll see evolving technical tools and standards, progress on layperson tools will be slower, with some narrow, domain-specific solutions (e.g., non-technical explainability for finance) emerging first. Like the general public’s understanding of ‘the web’ in the 90s, awareness, understanding and trust in AI will gradually increase as the capabilities and use of the technology spread. Using sophisticated tooling to automate what we would call human creativity is now commonly referred to as AI. However, the term has become almost meaningless, as “AI” now covers everything from predictive analytics to Amazon Echo speakers. The industry needs to get its arms around real AI.

Volkswagen Is Accelerating One Of The World’s Biggest Smart-Factory Projects

The biggest challenge, says Jean-Pierre Petit, Capgemini’s director of digital manufacturing, in an emailed comment to Forbes, is to “cross the chasm” from an initial pilot in a single plant to full-scale deployments, which is where the real benefits of digitization kick in. In particular, smart-factory projects require IT teams to work closely with “operational technology” (OT) groups managing machinery and other tech inside factories. Often, OT teams have become used to working quite independently and may resist IT’s efforts to drive change. By working closely together on VW’s industrial cloud project, Hofmann and Walker are sending a strong signal to their respective teams about the need for tight collaboration. The decision to launch pilots at several factories this year rather than just one was also deliberate. “You can put a ton of slides up [about the industrial cloud], but nobody is interested in that,” says Dirk Didascalou, one of the senior AWS executives involved in the project. “They need to see it working first.”



The question that helps businesses overcome unconscious bias

In the workplace, when you’re considering someone for a project or a promotion, turn that mantra into a question: What do I know about this person? You may have a feeling that this person is someone you do or don’t like or connect with, or a sense that this person “is ready for” and “deserves” the opportunity. Guided by that sense, you can easily pick and choose facts from their experience and work records to reinforce your decision. But when you start only with facts, a different picture can emerge. So drill down exclusively on what’s concrete. What projects did this person take part in or help lead, and how successful were they? What do the 360-degree assessments of this person show? What demonstrable impact did this person’s work have on sales, revenues, morale? Sometimes, the facts will back up a general sense that you have, or a description that someone else gave you. 


It's almost a cliché to point out how much of software today is built on or with open source. But Ian Massingham recently reminded me that for all the attention we lavish on back-end technologies--Linux, Docker containers, Kubernetes, etc.--front-end open source technologies actually claim more developer attention. Much of the magical front-end open source software that developers love today was born at early web giants like Google and Facebook. Front-end frameworks make it possible for Facebook, Google, LinkedIn, Pinterest, Airbnb, and others to iterate quickly, scale, deliver consistently fast responsiveness and, in general, mostly delight their users. Indeed, their entire businesses depend on great user experiences. While venture investors historically have plowed their funds into back-end startups creating open source software, the same is not nearly as true for the front end. Accel, Benchmark, Greylock, and other top-tier VCs made fortunes backing enterprise open source software startups like Heroku, MuleSoft, Red Hat, and many more.


Migrating to GraphQL at Airbnb

Two GraphQL features Airbnb relied upon during this early stage were aliasing and adapters. Aliasing allowed mapping between camel-case properties returned from GraphQL and snake-case properties of the old REST endpoint. Adapters were used to convert a GraphQL response so that it could be recursively diffed with a REST response, and ensure GraphQL was returning the same data as before. These adapters would later be removed, but they were critical for meeting the parity goals of the first stage. Stage two focuses on propagating types throughout the code, which increases confidence during later stages. At this point, no runtime behavior should be affected. The third stage improves the use of Apollo. Earlier stages directly used the Apollo Client, which fired Redux Actions, and components used the Redux store. Refactoring the app using React Hooks allows use of the Apollo cache instead of the Redux store.  A major benefit of GraphQL is reducing over-fetching.
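The adapter idea, converting a camel-case GraphQL response to snake case and recursively diffing it against the REST response, can be sketched as follows; this is an illustrative reconstruction, not Airbnb's code:

```python
# Sketch of the two parity tools described above: an adapter that rewrites
# camelCase GraphQL keys to the REST API's snake_case, and a recursive diff
# that reports any paths where the two responses disagree.
import re

def camel_to_snake(name):
    """hostRating -> host_rating"""
    return re.sub(r"(?<!^)(?=[A-Z])", "_", name).lower()

def adapt(value):
    """Recursively rewrite dict keys from camelCase to snake_case."""
    if isinstance(value, dict):
        return {camel_to_snake(k): adapt(v) for k, v in value.items()}
    if isinstance(value, list):
        return [adapt(v) for v in value]
    return value

def diff(a, b, path=""):
    """Collect the paths at which two (nested dict) responses disagree."""
    if isinstance(a, dict) and isinstance(b, dict):
        mismatches = []
        for key in set(a) | set(b):
            mismatches += diff(a.get(key), b.get(key), f"{path}.{key}")
        return mismatches
    if a != b:
        return [path or "<root>"]
    return []

graphql = {"hostRating": 4.9, "roomType": {"bedCount": 2}}
rest = {"host_rating": 4.9, "room_type": {"bed_count": 2}}
print(diff(adapt(graphql), rest))  # [] -> parity confirmed
```

An empty diff is the parity signal that let the migration proceed; once parity was proven, the adapters could be deleted, as the article notes.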


ASP.NET Core Microservices: Getting Started

Let's consider that we're exploring microservices architecture, and we want to take advantage of polyglot persistence to use a NoSQL database (Couchbase) for a particular use case. For our project, we're going to look at a Database per service pattern, and use Docker (docker-compose) to manage the database for the ASP.NET Core Microservices proof of concept. This blog post will be using Couchbase Server, but you can apply the basics here to the other databases in your microservices architecture as well. I'm using ASP.NET Core because it's a cross-platform, open-source framework. Additionally, Visual Studio (while not required) will give us a few helpful tools for working with Docker and docker-compose. But again, you can apply the basics here to any web framework or programming language of your choice. I'll be using Visual Studio for this blog post, but you can achieve the same effect (with perhaps a little more work) in Visual Studio Code or plain old command line.


Amazon Just Joined The Race To Dominate Quantum Computing In The Cloud

AWS is something of a latecomer to the quantum cloud. IBM kicked off the trend several years ago, and since then a wave of other companies have unveiled cloud-based offerings, including Amazon’s partners D-Wave and Rigetti. Nor is AWS the first cloud provider to offer access to a range of other companies’ quantum hardware: Microsoft took that honor when it launched its Azure Quantum cloud offering last month. Yet AWS is likely to become a force to be reckoned with in the field because of a unique advantage it has over its rivals. ... AWS became a cloud powerhouse because many of the services it now offers were initially developed for Amazon’s vast commercial empire. The same scenario could well play out with quantum computing. For instance, one of the things quantum machines are particularly good at is optimizing delivery routes. AWS could—quite literally—road test a quantum-powered service that lets Amazon plot the most efficient directions for its delivery vehicles to take as they drop off parcels. The machines could also help Amazon optimize the way goods flow through its vast warehouse network.


Simplifying data management in the cloud

Attempting to leverage the approaches and tools we use today will add complexity until the systems eventually collapse from the weight of it. Just think of the number of tools in your data center today that cause you to ask “what were they thinking?” Indeed, they were thinking much the same way we’re thinking today, including looking for tactical solutions that will eventually not provide the value they once did—and in some cases providing negative value.  I’ve come a long way to make a pitch to you, but as I’m thinking about how we solve this issue, an approach seems to pop up over and over as the best likely solution. Indeed, it’s been kicked around in different academic circles. It’s the notion of self-identifying data. I’ll likely hit this topic again at some point, but here’s the idea: Take the autonomous data concept a few steps further by embedding more intelligence with the data and more knowledge about the data itself. We would gain the ability to have all knowledge around the use of the data available by the data itself, no matter where it’s stored, or where the information is requested.
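One hedged way to picture self-identifying data is a record that carries its own origin, schema, and usage policy wherever it is stored or sent; the field names below are purely illustrative:

```python
# Sketch of "self-identifying data": the record embeds its own metadata -
# origin, schema version, and usage policy - so any system that receives it
# can interpret and govern it without consulting a central catalog.
from dataclasses import dataclass, asdict

@dataclass
class SelfIdentifyingRecord:
    payload: dict
    origin: str                            # where the data was produced
    schema_version: str                    # how to interpret the payload
    allowed_uses: tuple = ("analytics",)   # embedded usage policy

    def permits(self, use):
        return use in self.allowed_uses

record = SelfIdentifyingRecord(
    payload={"customer_id": 17, "spend": 240.5},
    origin="pos-terminal-eu-04",
    schema_version="2.1",
    allowed_uses=("analytics", "billing"),
)
print(record.permits("marketing"))  # False - the data itself says no
print(asdict(record)["origin"])     # metadata travels with the payload
```

The point of the sketch is that the knowledge about the data is available from the data itself, no matter where it is stored or where the request comes from.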


Survey: IT pros see career potential in as-a-Service trend

IT pros over 55 are most concerned with data complexity slowing down future data migrations. One question in the survey suggests that instead of tearing down data silos, cloud migration projects may create new ones: seventy-seven percent of respondents say that data is siloed between public and private clouds. Miller said that to avoid this, organizations need to choose the aaS model that makes the most business and policy sense. "Companies need to adopt a model that is not tied to one cloud or one premise but has the flexibility to move data and applications to where business needs are best met," he said. "If you adopt the right aaS model, you're breaking down the silos and driving overall efficiencies." While the majority of companies state that they have implemented at least some aaS projects, 66% of respondents say that IT pros avoid this new way of working out of fear of losing their jobs. The youngest respondents (ages 22 to 34) were most likely to think this at 70%, compared to 67% of 35- to 54-year-olds and only 45% of those 55 and older.



Quote for the day:


"Leadership development is a lifetime journey, not a quick trip." -- John Maxwell


Daily Tech Digest - March 27, 2019

5 things you can do in 5 minutes to boost your internet privacy


For websites and services where you need to ensure the security of your account, like your bank, passwords alone simply are not enough anymore. In this scenario, you need two-factor authentication (2FA) -- specifically, the kind where a mobile app generates login codes for you. Not the kind where you are sent an SMS text message, because those can be intercepted or just fail to arrive. With app-based 2FA, you log into an app or website like normal, then you open an app that generates a special six-digit code every 30 seconds. This authentication app is synced with the other app or service so that your code matches the one that the main app or service expects to get. You enter the code from the authenticator app into the app or website that's asking for it, and then your login is complete. Google makes its own free authenticator app for iOS and Android. Unfortunately, there isn't a standardized method for setting up your account with 2FA. Amazon, PayPal, eBay and your bank will all use slightly different systems and terminology.
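Under the hood, those six-digit codes typically come from the TOTP standard (RFC 6238), which applies HOTP (RFC 4226) to a 30-second time counter. A minimal sketch, using a raw byte secret rather than the base32-encoded secrets real authenticator apps scan from QR codes:

```python
# Minimal TOTP sketch (RFC 6238): HMAC the current 30-second interval with a
# shared secret, then dynamically truncate to six digits. Both the app on
# your phone and the server compute this; the codes match while clocks agree.
import hashlib
import hmac
import struct
import time

def hotp(secret, counter, digits=6):
    mac = hmac.new(secret, struct.pack(">Q", counter), hashlib.sha1).digest()
    offset = mac[-1] & 0x0F                      # dynamic truncation (RFC 4226)
    code = struct.unpack(">I", mac[offset:offset + 4])[0] & 0x7FFFFFFF
    return str(code % 10 ** digits).zfill(digits)

def totp(secret, step=30, now=None):
    """The six-digit code for the current 30-second window."""
    timestamp = time.time() if now is None else now
    return hotp(secret, int(timestamp // step))

print(hotp(b"12345678901234567890", 0))  # "755224" (RFC 4226 test vector)
```

Because the secret never leaves the device and the code changes every 30 seconds, an intercepted code is useless moments later, which is exactly the weakness of SMS delivery this scheme avoids.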



Scaling Microservices: Identifying Performance Bottlenecks

A bottleneck is the point in a system where contention occurs. In any system, these points usually surface during periods of high usage or load. Once identified, the bottleneck may be remedied, bringing performance levels into an acceptable range. Utilizing synthetic load testing enables you to test specific scenarios and identify potential bottlenecks, although this only covers contrived situations. In most cases, it is better to analyze production metrics and look for outliers to help identify trouble on the horizon. Key performance indicators from your application include requests/sec, latency, and request duration. Indicators from the runtime or infrastructure include CPU time, memory usage, heap usage, garbage collection, etc. This list isn't exhaustive; business metrics or other external metrics may factor into your optimizations as well.
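One common way to spot the outliers mentioned above is to compare median latency against tail percentiles: a healthy p50 next to an inflated p95 usually means a small fraction of requests is hitting a bottleneck. A small sketch (the durations below are invented sample data, not real production metrics):

```python
import math

def percentile(samples, pct):
    """Nearest-rank percentile, e.g. the p95 request duration."""
    ordered = sorted(samples)
    rank = max(1, math.ceil(pct / 100 * len(ordered)))
    return ordered[rank - 1]

# Hypothetical request durations in ms, with one slow outlier.
durations = [12, 13, 13, 14, 14, 14, 15, 15, 16, 250]
print(percentile(durations, 50))  # → 14
print(percentile(durations, 95))  # → 250
```

Here the median looks fine, but the p95 is more than an order of magnitude higher, which is exactly the kind of signal that warrants a closer look before load grows.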


The devil is in the data, but machine learning can unlock its potential

Effective data governance can enable intelligent real-time business decision-making that will, in turn, drive organisations in a more profitable direction. One of the best approaches when it comes to unleashing big data’s potential is investing in a data lake: a central repository that allows organisations to collect everything — every bit of data, regardless of its structure and format — which can then be accessed, normalised, explored and enriched by users across multiple business units to reveal patterns across a shared infrastructure. The advantage of this approach is that organisations can gain end-to-end visibility of the enterprise data and actionable business insights. The disadvantage is that the data has to be kept up to date, which takes time and effort. Another downside is the GDPR compliance and data security risks that are associated with depositing the entirety of an organisation’s business-critical data into a data lake.


Insights for your enterprise approach to AI and ML ethics

The promise of AI is in augmenting and enhancing human intelligence, expertise and experience. Think helping an aircraft mechanic make better, more accurate and more timely repairs – not automating the mechanic out of the picture. But the scope of what you can do is tempered by inherent limitations in today’s AI systems. I like to frame this as a recognition that computers don’t “understand” the world the way we do (if at all). I don’t want to get into an epistemological discussion about the definition or nature of understanding, but here’s what I think is a very illustrative and accessible example. One common application of AI is in image processing problems, i.e., I show the machine an image – like what you might take with your phone – and the machine’s task is to report back what’s in the image. You build a system like this by shoving thousands or millions or even billions of images into an AI program (such as a neural network) – you might hope that somehow, as a result of processing all of these images, the software builds some kind of semantic representation of the world.


Alibaba's UC Browser can be used to deliver malware


Dr Web researchers note that for now UC Browser represents a "potential threat" but warn that all users could be exposed to malware due to its design.  "If cybercriminals gain control of the browser's command-and-control server, they can use the built-in update feature to distribute any executable code, including malware. Besides, the browser can suffer from MITM (man-in-the-middle) attacks," the security company notes. The MITM threat arises because UCWeb committed the security blunder of delivering updates to the browser over an unsecured HTTP connection. "To download new plug-ins, the browser sends a request to the command-and-control server and receives a link to a file in response. Since the program communicates with the server over an unsecured channel (the HTTP protocol instead of the encrypted HTTPS), cybercriminals can hook the requests from the application," explains Dr Web.  "They can replace the commands with ones containing different addresses. ... "
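The attack works because a plain-HTTP download gives the client no way to tell a genuine plug-in from a swapped-in payload. Beyond moving to HTTPS, a standard defense is to verify each downloaded artifact against an integrity value shipped out-of-band (a pinned digest, or better, a signature). A minimal sketch of the digest check, with invented payload bytes:

```python
import hashlib

def verify_update(payload, expected_sha256):
    """Accept an update only if its digest matches a value pinned out-of-band."""
    return hashlib.sha256(payload).hexdigest() == expected_sha256

genuine = b"plugin-v2 binary"
pinned = hashlib.sha256(genuine).hexdigest()  # shipped with the app, never over HTTP

print(verify_update(genuine, pinned))      # → True
print(verify_update(b"tampered", pinned))  # → False
```

A pinned hash only helps if the hash itself cannot be replaced by the same attacker, which is why real update systems pair TLS with code signing rather than relying on either alone.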


Deep Learning for Speech Synthesis of Audio from Brain Activity

In three separate experiments, research teams used electrocorticography (ECoG) to measure electrical impulses in the brains of human subjects while the subjects listened to someone speaking, or while the subjects themselves spoke. The data was then used to train neural networks to produce speech sound output. The motivation for this work is to help people who cannot speak by creating a brain-computer interface or "speech prosthesis" that can directly convert signals in the user's brain into synthesized speech sound. The first experiment, which was run by a team at Columbia University, used data from patients undergoing treatment for epilepsy. The patients had electrodes implanted in their auditory cortex, and ECoG data was collected from these electrodes while the patients listened to recordings of short spoken sentences. The researchers trained a deep neural-network (DNN) using Keras and Tensorflow using the ECoG data as the input and a vocoder/spectrogram representation of the recorded speech as the target.
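The pipeline described here is a supervised regression: ECoG feature frames in, spectrogram frames out. As a toy NumPy illustration of the shapes involved (all dimensions and data are invented, and a closed-form linear ridge decoder stands in for the actual Keras/TensorFlow DNN):

```python
import numpy as np

# Hypothetical stand-ins: 200 time frames of 64-channel ECoG
# features, mapped to 32-bin spectrogram target frames.
rng = np.random.default_rng(0)
X = rng.normal(size=(200, 64))                       # ECoG input features
W_true = rng.normal(size=(64, 32))
Y = X @ W_true + 0.01 * rng.normal(size=(200, 32))   # target spectrogram

# Ridge regression: a linear decoder trained in closed form.
lam = 1e-3
W = np.linalg.solve(X.T @ X + lam * np.eye(64), X.T @ Y)
mse = float(np.mean((Y - X @ W) ** 2))
```

In the real experiments the decoder is a deep network and the output feeds a vocoder to produce audible speech, but the input/target framing is the same.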


An inside look at Tempo Automation's IIoT-powered ‘smart factory’
“There could be up to 20 robots, 400 unique parts, and 25 people working on the factory floor to produce one order start to finish in a matter of hours,” explained Shashank Samala, Tempo’s co-founder and vice president of product, in an email. Tempo “employs IIoT to automatically configure, operate, and monitor” the entire process, coordinated by a “connected manufacturing system” that creates an “unbroken digital thread from design intent of the engineer captured on the website, to suppliers distributed across the country, to robots and people on the factory floor.” Rather than the machines on the floor functioning as “isolated islands of technology,” Samala added, Tempo Automation uses Amazon Web Services (AWS) GovCloud to network everything in a bi-directional feedback loop. “After customers upload their design to the Tempo platform, our software extracts the design features and then streams relevant data down to all the devices, processes, and robots on the factory floor,” he said. This loop then works the other way: As the robots build the products, they collect data and feedback about the design during production.



Using value stream management and mapping to boost business innovation

Value stream mapping purists may argue that the above exercise is not the real process because traditional components such as the time metrics, activity ratios and future state were omitted. Fear not: these components are included in a full-blown formal value stream mapping exercise. However, teams such as Thrasher’s have made substantial improvements from shorter versions of the exercise by making work visible. The net result is a compelling change in the right direction. Value stream management is the practice of improving the flow of the activities that deliver and protect business value -- and proving it. It’s a nascent digital concept that measures work artifacts in real time to visualize the flow of business value and expose bottlenecks. A significant strength of this practice centers on how and where work is undertaken. This activity is captured through the work items mentioned above in the toolchain, providing a traceable record of how software is planned, built and delivered.


Redis in a Microservices Architecture

Redis can be widely used in a microservices architecture. It is probably one of the few popular software solutions that may be leveraged by your application in so many different ways. Depending on the requirements, it can act as a primary database, cache, or message broker. Because it is also a key/value store, we can use it as a configuration server or discovery server in a microservices architecture. Although it is usually described as an in-memory data structure store, we can also run it in persistent mode. ... If you have already built microservices with Spring Cloud, you probably have some experience with Spring Cloud Config. It is responsible for providing a distributed configuration pattern for microservices. Unfortunately, Spring Cloud Config does not support Redis as a property source's backend repository. That's why I decided to fork the Spring Cloud Config project and implement this feature. I hope my implementation will soon be included in the official Spring Cloud release, but, for now, you may use my forked repo to run it. It is available on my GitHub account: piomin/spring-cloud-config.
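As a sketch of the key/value configuration pattern, per-service properties could be kept in Redis hashes, one hash per service and profile. The key names below are hypothetical and illustrate the idea only, not the layout used by the forked project:

```shell
# Store properties for service "account-service", profile "default"
HSET config:account-service:default server.port 8080
HSET config:account-service:default spring.datasource.url jdbc:postgresql://db:5432/accounts

# A client reads its whole property map in one round trip
HGETALL config:account-service:default
```

This keeps configuration lookups to a single `HGETALL` per service at startup, and Redis keyspace notifications could, in principle, push refresh events when a property changes.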


Data visualization via VR and AR: How we'll interact with tomorrow's data

Data visualization in VR and AR could be the next big use case for the technologies. It's early days, but examples of 3D data visualizations hint at big changes to come in how we interact with data. Recently, I spoke with Simon Wright, Director of AR/VR for Genesys, about one such experiment. Genesys helps companies streamline their customer service experience with automated phone menus and chatbots, for example, but in his role Wright has a lot of latitude to push the boundaries of Mixed Reality technologies for enterprise customers. "One of the things I'm personally excited about is the ability to create hyper visualizations," Wright tells me. "We capture massive amounts of data, and we've created prototypes to almost magically bring up a 3D model of Genesys data. This is where there could be huge opportunities for AR, which has advantages over a 2D screen." For one recent project, Wright and his team wanted to project data pertaining to website analytics onto the wall of a restaurant in a beautiful way. "It started as a marketing-led project," he explains.




Quote for the day:


"Leadership to me means duty, honor, country. It means character, and it means listening from time to time." -- George W. Bush