Showing posts with label AIoT. Show all posts
Showing posts with label AIoT. Show all posts

Daily Tech Digest - March 15, 2025


Quote for the day:

"The most powerful leadership tool you have is your own personal example." -- John Wooden


Guardians of AIoT: Protecting Smart Devices from Data Poisoning

Machine learning algorithms rely on datasets to identify and predict patterns. The quality and completeness of this data determines the performance of the model is determined by the quality and completeness of this data. Data poisoning attacks tamper the knowledge of the AI by introducing false or misleading information and usually following these steps: The attacker manipulates the data by gaining access to the training dataset and injects malicious samples; The AI is now getting trained on the poisoned data and incorporates these corrupt patterns into its decision-making process; Once the poisoned data is deployed, the attackers now exploit it to bypass a security system or tamper critical tasks. ... The addition of AI into IoT ecosystems has intensified the potential attack surface. Traditional IoT devices were limited in functionality, but AIoT systems rely on data-driven intelligence, which makes them more vulnerable to such attacks and hence, challenge the security of the devices: AIoT devices collect data from different sources which increases the likelihood of data being tampered; The poisoned data can have catastrophic effects on the real-time decision making; Many IoT devices possess limited computational power to implement strong security measures which makes them easy targets for these attacks.


Preparing for The Future of Work with Digital Humans

For businesses to prepare their staff for the workplace of tomorrow, they need to embrace the technologies of tomorrow—namely, digital humans. These advanced solutions will empower L&D leaders to drive immersive learning experiences for their staff. Digital humans use various technologies and techniques like conversational AI, large language models (LLMs), retrieval augmented generation, digital human avatars, virtual reality (VR,) and generative AI to produce engaging and interactive scenarios that are perfect for training. Recall that a major issue with current training methods is that staff never have opportunities to apply the information they just consumed, resulting in the loss of said information. Digital humans avoid this problem by generating lifelike roleplay scenarios where trainees can actually apply and practice what they have learned, reinforcing knowledge retention. In a sales training example, the digital human takes on the role of a customer, allowing the employee to practice their pitch for a new product or service. The employee can rehearse in realistic conditions rather than studying the details of the new product or service and jumping on a call with a live customer. A detractor might push back and say that digital humans lack a necessary human element.


3 ways test impact analysis optimizes testing in Agile sprints

Code modifications or application changes inherently present risks by potentially introducing new bugs. Not thoroughly validating these changes through testing and review processes can lead to unintended consequences—destabilizing the system and compromising its functionality and reliability. However, validating code changes can be challenging, as it requires developers and testers to either rerun their entire test suites every time changes occur or to manually identify which test cases are impacted by code modifications, which is time-consuming and not optimal in Agile sprints. ... Test impact analysis automates the change analysis process, providing teams with the information they need to focus their testing efforts and resources on validating application changes for each set of code commits versus retesting the entire application each time changes occur. ... In UI and end-to-end verifications, test impact analysis offers significant benefits by addressing the challenge of slow test execution and minimizing the wait time for regression testing after application changes. UI and end-to-end testing are resource-intensive because they simulate comprehensive user interactions across various components, requiring significant computational power and time. 


No one knows what the hell an AI agent is

Well, agents — like AI — are a nebulous thing, and they’re constantly evolving. OpenAI, Google, and Perplexity have just started shipping what they consider to be their first agents — OpenAI’s Operator, Google’s Project Mariner, and Perplexity’s shopping agent — and their capabilities are all over the map. Rich Villars, GVP of worldwide research at IDC, noted that tech companies “have a long history” of not rigidly adhering to technical definitions. “They care more about what they are trying to accomplish” on a technical level, Villars told TechCrunch, “especially in fast-evolving markets.” But marketing is also to blame in large part, according to Andrew Ng, the founder of AI learning platform DeepLearning.ai. “The concepts of AI ‘agents’ and ‘agentic’ workflows used to have a technical meaning,” Ng said in a recent interview, “but about a year ago, marketers and a few big companies got a hold of them.” The lack of a unified definition for agents is both an opportunity and a challenge, Jim Rowan, head of AI for Deloitte, says. On the one hand, the ambiguity allows for flexibility, letting companies customize agents to their needs. On the other, it may — and arguably already has — lead to “misaligned expectations” and difficulties in measuring the value and ROI from agentic projects. “Without a standardized definition, at least within an organization, it becomes challenging to benchmark performance and ensure consistent outcomes,” Rowan said. 


Researchers astonished by tool’s apparent success at revealing AI’s hidden motives

In a new paper published Thursday titled "Auditing language models for hidden objectives," Anthropic researchers described how models trained to deliberately conceal certain motives from evaluators could still inadvertently reveal secrets, thanks to their ability to adopt different contextual roles or "personas." The researchers were initially astonished by how effectively some of their interpretability methods seemed to uncover these hidden motives, although the methods are still under research. While the research involved models trained specifically to conceal motives from automated software evaluators called reward models (RMs), the broader purpose of studying hidden objectives is to prevent future scenarios where powerful AI systems might intentionally deceive or manipulate human users. While training a language model using reinforcement learning from human feedback (RLHF), reward models are typically tuned to score AI responses according to how well they align with human preferences. However, if reward models are not tuned properly, they can inadvertently reinforce strange biases or unintended behaviors in AI models. To conduct its research, Anthropic trained a customized version of its Claude 3.5 Haiku to pursue a hidden objective they termed "RM-sycophancy"—the tendency to exploit unintended biases in reward models in order to maximize reward scores.


Strategies for Success in the Age of Intelligent Automation

Firstly, the integration of AI into existing organizational frameworks calls for a largely collaborative environment. It is imperative for employees to perceive AI not as a usurper of employment, but instead as an ally in achieving collective organizational goals. Cultivating a culture of collaboration between AI systems and human workers is essential to the successful deployment of intelligent automation. Organizations should focus on fostering open communication channels, ensuring that employees understand how AI can enhance their roles and contribute to the organization’s success. To achieve this, leadership must actively engage with employees, addressing concerns and highlighting the benefits of AI integration. ... The ethical ramifications of AI workforce deployment demand meticulous scrutiny. Transparency, accountability, and fairness are integral and their importance can’t be overstated. It’s vital that AI-driven decisions are aligned with ethical standards. Organizations are responsible for establishing robust ethical frameworks that govern AI interactions, mitigating potential biases and ensuring equitable outcomes. The best way to do this requires implementing standards for monitoring AI systems, ensuring they operate within defined ethical boundaries.


AI & Innovation: The Good, the Useless – and the Ugly

First things first: there is good innovation, the kind that genuinely benefits society. AI that enhances energy efficiency in manufacturing, aids scientific discoveries, improves extreme weather prediction, and optimizes resource use in companies falls into this category. Governments can foster those innovations through targeted R&D support, incentives for firms to develop and deploy AI, “buy European tech” procurement policies, and investments in robust digital infrastructure. The Competitiveness Compass outlines similar strategies. That said, given how many different technologies are lumped together in the AI category—everything from facial recognition technology to smart ad tech, ChatGPT, and advanced robotics—it makes little sense to talk about good innovation and “AI and productivity” in the abstract. Most hype these days is about generative AI systems that mimic human creative abilities with striking aptitude. Yet, how transformative will an improved ChatGPT be for businesses? It might streamline some organizational processes, expedite data processing, and automate routine content generation. For some industries, like insurance companies, such capabilities may be revolutionary. For many others, its innovation footprint will be much more modest. 


Revolution at the Edge: How Edge Computing is Powering Faster Data Processing

Due to its unparalleled advantages, edge computing is rapidly becoming the primary supporting technology of industries where speed, reliability, or efficiency aren’t just useful but imperative. Just like edge computing helps industries remain functional and up to date, staying informed with the latest sports news is important for every fan. Follow Facebook MelBet and receive real-time alerts, insider information, and a touch of comedy through memes and behind-the-scenes videos all in one place. Subscribe and get even closer to the world of sport! Edge computing relies on IoT as its most crucial component since there are billions of connected devices producing an immense and constant amount of data that needs to be processed right away. IoT devices in the residential sector, such as smart sensors in homes or Nest smart thermostats, as well as peripherals used for industrial automation in factories, all use edge computing. ... The way edge computing will function in the future is very exciting. With 5G, AI, and IoT, edge technologies are likely to become smarter, more widespread, and faster. Imagine a world where factories optimize themselves, smart traffic systems talk to autonomous vehicles, and healthcare devices stop illnesses from happening before they start.


Harnessing the data storm: three top trends shaping unstructured data storage and AI

The sheer volume of unstructured information generated by enterprises necessitates a new approach to storage. Object storage offers a better, more cost-effective method for handling significant datasets compared to traditional file-based systems. Unlike traditional storage methods, object storage treats each data item as a distinct object with its metadata. This approach offers both scalability and flexibility; ideal for managing the vast quantities of images, videos, sensor data, and other unstructured content generated by modern enterprises. ... Data lakes, the centralized repositories for both structured and unstructured data, are becoming increasingly sophisticated with the integration of AI and machine learning. These enable organizations to delve deeper into their data, uncovering hidden patterns and generating actionable insights without requiring complex and costly data preparation processes. ... The explosion of unstructured data presents both immense opportunities and challenges for organizations in every market across the globe. To thrive in this data-driven era, businesses must embrace innovative approaches to data storage, management, and analysis that are both cost-effective and compliant with evolving regulations. 


Open Source Tools Seen as Vital for AI in Hybrid Cloud Environments

The landscape of enterprise open source solutions is evolving rapidly, driven by the need for flexibility, scalability, and innovation. Enterprises are increasingly relying on open source technologies to drive digital transformation, accelerate software development, and foster collaboration across ecosystems. With advancements in cloud computing, AI, and containerization, open source solutions are shaping the future of IT by providing adaptable and secure platforms that meet evolving business needs. The active and diverse community support ensures continuous improvement, making open source a cornerstone of modern enterprise technology strategies. Red Hat's portfolio, including Red Hat Enterprise Linux, Red Hat OpenShift, Red Hat AI and Red Hat Ansible Automation Platform, provides robust platforms that support diverse workloads across hybrid and multi-cloud environments. Additionally, Red Hat's extensive partner ecosystem provides more seamless integration and support for a wide range of technologies and applications. Our commitment to open source principles and continuous innovation allows us to deliver solutions that are secure, scalable, and tailored to the needs of our customers. Open source has proven to be trusted and secure at the forefront of innovation


Daily Tech Digest - July 25, 2022

Digital presenteeism is creating a future of work that nobody wants

While technology has enabled more employees to work remotely – bringing considerable benefits in doing so – it has also facilitated digital presenteeism, Qatalog and GitLab concluded. One solution is to make technology less invasive and "more considerate of the user and completely redesigned for the new way of work, rather than supporting old habits in new environments" – although this may be easier said than done. According to Raud, current solutions require a "radical redesign that is more considerate of the user and prioritizes their objectives, rather than simply capturing our attention." Culture shift is also necessary for async work to become normalized, says Rauf. This comes from the top, and starts with trust: "When leaders send a message to their team, make clear whether or not it needs an immediate response or better yet, schedule updates to go out when people are most likely online. If I message a team member at an odd hour, I prefix a 'for tomorrow' or 'no rush', so they know it's not an urgent issue."


Confronting the risks of artificial intelligence

Because AI is a relatively new force in business, few leaders have had the opportunity to hone their intuition about the full scope of societal, organizational, and individual risks, or to develop a working knowledge of their associated drivers, which range from the data fed into AI systems to the operation of algorithmic models and the interactions between humans and machines. As a result, executives often overlook potential perils (“We’re not using AI in anything that could ‘blow up,’ like self-driving cars”) or overestimate an organization’s risk-mitigation capabilities (“We’ve been doing analytics for a long time, so we already have the right controls in place, and our practices are in line with those of our industry peers”). It’s also common for leaders to lump in AI risks with others owned by specialists in the IT and analytics organizations. Leaders hoping to avoid, or at least mitigate, unintended consequences need both to build their pattern-recognition skills with respect to AI risks and to engage the entire organization so that it is ready to embrace the power and the responsibility associated with AI.


The AIoT Revolution: How AI and IoT Are Transforming Our World

AIoT is a growing field with many potential benefits. Businesses that adopt AIoT can improve their efficiency, decision-making, customization, and safety. ... Increased efficiency: By combining AI with IoT, businesses can automate tasks and processes that would otherwise be performed manually. This can free up employees to focus on more important tasks and increase overall productivity. Improved decision-making: By collecting data from various sources and using AI to analyze it, businesses can gain insights they wouldn’t otherwise have. It can help businesses make more informed decisions, from product development to marketing. Greater customization: Businesses can create customized products and services tailored to their customers’ needs and preferences using data collected from IoT devices. This can lead to increased customer satisfaction and loyalty. Reduced costs: Businesses can reduce their labor costs by automating tasks and processes. Additionally, AIoT can help businesses reduce their energy costs by optimizing their use of resources. Increased safety: By monitoring conditions and using AI to identify potential hazards, businesses can take steps to prevent accidents and injuries.


It's time for manufacturers to build a collaborative cybersecurity team

Despite the best laid plans, bear in mind that these are active, interconnected and dynamic systems. It’s impossible to separate physical and cybersecurity elements, as their role in business operations is so foundational. As the landscape for new technologies and best practices change, adapt along with it. Ensure the lines of communication are open, management maintains involvement in the process, and all the key parties across IT and OT are committed to working collaboratively to strengthen every element of security. These tenets will help manufacturing organizations stay nimble in the face of an ever-changing security landscape. As the convergence of IT and OT continues, the risk of cyberthreats will continue to rise along with it. Building a collaborative security team across both IT and OT will help to reduce organizational risk and fortify critical infrastructure. By involving leadership, setting a plan, and staying adaptable as things change, security leaders will be armed with a comprehensive security approach that supports near-term needs and offers long-term business sustainability.


Why diverse recruitment is the key to closing the cyber-security skills gap

When it comes to mitigating the ever-evolving cyber threat, diversity is a crucial, but often overlooked, factor. As cyber attacks are becoming increasingly culturally nuanced, it is important that we meet the challenge by drawing from a wide range of backgrounds and life experiences. Cyber attacks come from everywhere - from a wide range of ages, locations, and educational backgrounds - so our responders should too. Perceptions of cyber security often see it as revolving around highly complex technology and driven mainly by this. While tech clearly plays a crucial role in mitigating cyber attacks, successfully countering them would not be possible without the role performed by people. This is enriched hugely by having a workforce which covers as many educational and socio-economic backgrounds as possible. In making a concerted effort towards a more diverse workforce, the cyber-security industry will be able to gain a deeper awareness of the cultural nuances that underlie cyber attacks. It’s important to fully understand what we mean by diverse hiring. Considering entry routes into the industry is a big part of attracting a broader range of demographics. 


You have mountains of data, but do you know how to climb?

We have more data than ever before, but it is not enough to merely accumulate it. Dedicate time and resources to establishing digital governance to ensure the data you are using is clean, consistently implemented, and universally understood. ... The tech team is not solely responsible for the quality of our data—we all need to take ownership of and champion the data we use. Visualization tools bridge the gap between the tech team and the business team, doing away with barriers to entry and enabling end-to-end analytics. In this way, you can empower employees to immerse themselves in and take ownership of the data at hand. Users no longer have to submit a request to the tech team to create a report and twiddle their thumbs until it comes back. They can now take initiative and do it themselves, creating a more streamlined process and a more informed group of employees who can work quickly to make data-driven decisions. Furthermore, when you empower people to take control of their data and ask their own questions, they may uncover new insights they would never have found when presented with pre-packaged reports.


Software Supply Chain Concerns Reach C-Suite

From Cornell's perspective, DevOps — or hopefully, DevSecOps groups — should really spearhead the management of software supply chain risk. "They are the ones who own the software development process, and they see the code that is written," he says. "They see the components that are pulled in. They watch the software get built. And they make it available to whoever is next on down the line." Given this vantage point, they can help to impact — in a positive way — an organization's software supply chain security status by implementing good policies and practices around what open source code is included in their software and when those open source components are upgraded. "Forward-leaning DevSecOps teams can take advantage of their automation and testing to start pushing for more aggressive component-upgrade life cycles and other approaches that help minimize technical debt," he explains. He says they’re also in a position and own the tooling to help generate SBOMs that they can then provide to software consumers who are in turn looking to manage their supply chain risk.


Know Your Risks – and Your Friends’ Risks, Too

Identifying risks and documenting response actions are only part of the equation. Crucial to the overall C-SCRM process is the communication and education of all parties involved about organizational risks and how to respond. Organizations must ensure that all personnel and third-party partners are trained on supply chain risks, encourage awareness from the top down, and involve partners and suppliers in organization-wide tests and assessments of response plans. Organizations should establish open communications with their supplier partners about risk concerns and encourage partners to do the same in return. The general idea is individual strength through community strength. As an organization matures its C-SCRM (or overall cybersecurity) process, lessons learned and best practices should be shared along the way to help bolster others’ programs. The concept of C-SCRM is not a new one. In fact, there are many sources that have provided guidance on the topic over the years. The National Institute of Standards and Technology (NIST) has a Special Publication (SP) 800-161 and an Internal Report (IR) 8276 on the subject. 


3 data quality metrics dataops should prioritize

The good news is that as business leaders trust their data, they’ll use it more for decision-making, analysis, and prediction. With that comes an expectation that the data, network, and systems for accessing key data sources are available and reliable. Ian Funnell, manager of developer relations at Matillion, says, “The key data quality metric for dataops teams to prioritize is availability. Data quality starts at the source because it’s the source data that run today’s business operations.” Funnell suggests that dataops must also show they can drive data and systems improvements. He says, “Dataops is concerned with the automation of the data processing life cycle that powers data integration and, when used properly, allows quick and reliable data processing changes.” Barr Moses, CEO and cofounder of Monte Carlo Data, shares a similar perspective. “After speaking with hundreds of data teams over the years about how they measure the impact of data quality or lack thereof, I found that two key metrics—time to detection and time to resolution for data downtime—offer a good start.”


How Optic Detects NFT Fraud with AI and Machine Learning

The NFT space has ongoing issues with fraud, including through bad actors wholesale lifting art from one project and using it in a second project — a process often referred to as “copyminting.” They are derivative projects that have a few too many similarities to the original project to be considered anything other than a ripoff. While most of these duplicate projects do very little sales volume relative to the original, they may damage the underlying brand, contribute to the overall distrust of the NFT space, or trick less savvy buyers into spending money on something that’s the jpg equivalent of a street vendor shilling fake Rolex watches. To help combat this fraud, a few companies are emerging that specialize in fraud detection in NFTs. They tend to leverage blockchain data to help determine which project came first and apply some image detection to find metadata matches. One of these solutions is Optic, which uses artificial intelligence and machine learning to analyze the images associated with an NFT, which helps NFT marketplaces and minting platforms catch copies and protect both creators and buyers.



Quote for the day:

"The test we must set for ourselves is not to march alone but to march in such a way that others will wish to join us." -- Hubert Humphrey

Daily Tech Digest - April 11, 2020

Expressing The BIAN® Reference Model For The Banking Industry In The Archimate® Modeling Language


The expression of the BIAN model in ArchiMate has been a joint effort by BIAN and The Open Group, the stewards of the ArchiMate standard. The full details of this mapping can be found in the document “ArchiMate® Modeling Notation for the Financial Industry Reference Model: Banking Industry Architecture Network (BIAN)” published by The Open Group. To explain the use of BIAN in the ArchiMate language, The Open Group has published a case study whitepaper co-authored by one of us (Patrick), which uses the fictitious but realistic Archi Banking Group as an example. In this blog, we want to give you an impression of what this is about, picking and choosing some of the juiciest bits. For the full case study, please refer to the whitepaper. Archi Banking Group is the result of the acquisition of several banks in different countries, as most international banks are nowadays. This has come with the typical challenges of integration and cost control. In particular its fragmented information is becoming a compliance risk and the challenges of ‘open banking’ (e.g. PSD2) are difficult to meet. 



Development Versus QA: Ending the Battle Once and for All


The reason why minimizing blame is the number one priority for QA engineers is that in the QA realm, there is a general acceptance that bugs are always going to make it to production, no matter what. This is something we expect because a 100% guaranteed bug-free product would take years to ship rather than weeks, and would therefore be economically unviable. Since they know there will be problems to deal with no matter what they do, they want to show that they did everything in their power to prevent those problems. Naturally, they want to write as many tests as possible to minimize the risk of bugs that they should have caught. But since it’s impossible to write an infinite amount of tests, they have to prioritize what to test for. A QA team is given no data by which to prioritize what to test, so this prioritization is essentially a guessing game. It may be an educated guessing game based on experience and expertise, but it’s still predicting what users are most likely to do on an application without objective data as to what they really care about and how they really will use the application.



Microsoft Teams Promises Great Video Calls: No More Typing Or Dog Noises

In this photo illustration a Microsoft teams logo is seen...
As reported by Venture Beat, Microsoft has promised AI-enhanced innovations which will be able to suppress background noise – in real time – so your call can continue smoothly. Instead of merely reducing the impact that an air conditioning unit has on the call, Teams will aim to suppress other noises not normally covered, such as doors slamming, over-excited typing on a computer keyboard or my beloved pooch having an inconvenient moment. The keyboard is a case in point. If you’re taking notes during an interview, you ideally don’t want that clickety-clack noise to intrude on the conversation. It’s those noises which aren’t “stationary” as Microsoft says, that are hard to suppress without AI. It takes hundreds of hours of data to work out what’s desirable and what’s not, using audio books to represent voices and then other sources to create those pesky noises. All of which leads to the creation of neural network to start the AI working on the data to sort out what should be heard and what shouldn’t. The power of the cloud can be leveraged to help, providing fast, real-time analysis of what’s going on and deciding what should be heard by the person at the other end of the call and what shouldn’t.



Scientists develop AI that can turn brain activity into text

The system was not perfect. Among its mistakes, “Those musicians harmonise marvellously” was decoded as “The spinach was a famous singer”, and “A roll of wire lay near the wall” became “Will robin wear a yellow lily”. However, the team found the accuracy of the new system was far higher than previous approaches. While accuracy varied from person to person, for one participant just 3% of each sentence on average needed correcting – higher than the word error rate of 5% for professional human transcribers. But, the team stress, unlike the latter, the algorithm only handles a small number of sentences. “If you try to go outside the [50 sentences used] the decoding gets much worse,” said Makin, adding that the system is likely relying on a combination of learning particular sentences, identifying words from brain activity, and recognising general patterns in English. The team also found that training the algorithm on one participant’s data meant less training data was needed from the final user – something that could make training less onerous for patients.


IBM, Open Mainframe Project launch initiative to help train COBOL coders


Despite its age, COBOL is reliable and is still widely used -- there's an estimated 220 billion lines of COBOL still in use today. IBM, one of the founding organizations behind COBOL, continues to offer mainframes compatible with the language. The issue with COBOL now is that there are few programmers left with the skills to maintain legacy COBOL applications. Specifically, state agencies are struggling to find actively working COBOL engineers who can update their unemployment benefit systems to factor in new parameters for unemployment eligibility. To address this skills gap, IBM and Linux Foundation's Open Mainframe Project have launched a new program to help connect states with programmers who have COBOL language skills that are proving key in the push to manage the surging number of unemployment claims nationwide. ... "We've seen customers need to scale their systems to handle the increase in demand and IBM has been actively working with clients to manage those applications," said Meredith Stowell, VP of IBM Z Ecosystem. "There are also some states that are in need of additional programming skills to make changes to COBOL.


World Economic Forum explores blockchain interoperability

blockchain interoperability
Blockchain interoperability is often viewed as a technical challenge, but there’s a lot more to it than that. The WEF divides into the Business, the Platform, and the Infrastructure.  The business aspect encompasses the governance of the blockchain and trust between the two networks, as well as data standardization. To share data, it has to be standardized. But often this homogeneity is focused within a single network as opposed to across networks. Other business aspects include incentives and the legal framework, which can be a bigger challenge across jurisdictions. The platform refers to the blockchain protocol, consensus mechanism, smart contract languages, and how users are authorized and permissioned. And the infrastructure looks at the hosting of servers in hybrid clouds, managed blockchains, and whether there are potentially proprietary components that might hinder interoperability. Different projects that implement interoperability are explored, mostly for public blockchains, include the well-known projects Cosmos and Polkadot. For enterprise blockchain, the WEF referred to Hyperledger Quilt, the open source implementation of Ripple’s Interledger, as well as the Corda Settler.


Cybersecurity officials say state-backed hackers taking advantage of pandemic

Silhouettes of laptop users are seen next to a screen projection of binary code are seen in this picture illustration taken March 28, 2018.
“Bad actors are using these difficult times to exploit and take advantage of the public and business,” Bryan Ware, CISA’s assistant director for cybersecurity, said in a statement. The agencies warned that hackers were also exploiting growing demand for work-from-home solutions by passing off their malicious tools as remote collaboration software produced by Zoom and Microsoft. Hackers are also targeting the virtual private networks that are allowing an increasing number of employees to connect to their offices, the agencies said. ... “Crowdsourced security platforms are built to simultaneously enable a remote workforce and help organizations maximize their security resources while benefiting from the intelligence and insights of a ‘crowd’ of security researchers,” Bugcrowd CEO Ashish Gupta told VentureBeat. “In the current environment, a lot of companies don’t have the required resources to secure and test remote environments where the majority of business is now taking place.”


AIoT and Intelligence on the Edge


Edge intelligence allows a high level of data to be processed and analyzed, and for decisions to be made locally, without being sent to the cloud. Take for example a self-navigating drone, instead of relying on a service hosted on the cloud to tell the drone where to go next, the drone itself is now able to decide its own path in the field, even when connections to cloud hosted services are not reliable. ... For architects and program leads working on such initiatives within the company, it’s mainly a mindset change in regards to how the solution is designed, including capabilities of the devices on the edge and where the decision-making step in a process happens. Feasibility for scenarios such as the drone automatically calculating its own path instead of relying on a cloud-hosted service are now better than before, and a few demos or proof-of -concept attempts could now move many of these stories from the backlog and bring implementation dates forward. While AIoT in its re-imagined, converged form may be new, the two original fields (AI and IoT) that merged to create AIoT are both mature and well into mainstream adoption. 


What do CISOs want from cybersecurity vendors right now?

CISOs cybersecurity vendors
To companies providing cybersecurity solutions, the polled executives advised to avoid sales pitches that involve fear-mongering, to dial down cold calls and emails, and to concentrate on nurturing existing relationships. “Messaging ought to be geared towards impacting an enterprise’s bottom line or community, rather than attempting to fearmonger or stoke panic over a situation already causing CISOs enough anxiety,” YL Ventures explained. “Cybersecurity executives feel quite unanimously about the marketing frenzy and, according to our sources, are compiling a ‘black list’ of vendors guilty of using this tactic.” Companies should concentrate on discovering what they can do to help their existing customers and discussing their customers’ experiences. Not only will this improve customer relations, but also provide helpful information that can inform the vendor’s future plans. Last but not least, vendors should consider making goodwill gestures. “Profiteering off of a world-wide tragedy will do vendors little service in the eyes of prospective customers. 41% of the CISOs we consulted with praised technology companies using their services to help other businesses and advised entrepreneurs to follow in their lead instead,” YL Ventures noted.


Why architecting an enterprise should not be IT-centric


The first and most important reason that architecture should not be IT-centric is the same reason why more and more IT-functions are merged with ‘business functions’. A popular metaphor was (is?) that information should be like water coming out of a faucet. In that metaphor the IT department is responsible for developing IT to deliver the information need from the ‘business’. The business aks for ‘information provisioning’, the IT department delivers. This ‘what — how’ division has been the reason for non-functioning business / IT cooperation in lots of organisations in the past decades. An enterprise in general does not need ‘information’ as such, but it needs resources and technology to execute business processes. The type of technology is not very important from a business perspective. It could be humans doing the job, mechanic or digital technology and mostly it will be a mesh of all these types of technology. As a side remark. Yes, data as a source for doing data intelligence could be seen as a product delivered by an organisational department, but that is only a small part of the totality of digital technology.



Quote for the day:


"Conviction is worthless unless it is converted into conduct." -- Thomas Carlyle