The big question is: what do you actually automate as individual users? People, more often than not, have no idea what it is they actually want to do with the software. This will change as eureka moments are shared and Windows users begin to understand that they no longer have to produce weekly reports, update databases, or populate CRM systems manually. But this is not a disruptive moment in the history of RPA; it's more akin to a dripping tap eventually filling up the bath. Microsoft is late to the game, very late. While RPA is new to some and still unheard of by others, it's now considered a fairly mainstream business software solution, propped up by process discovery, process mining, and a plethora of consulting firms all benefiting from the success of the technology. Of course, the very entrance of Microsoft into the market is much more than 'a bit of news', but it really just validates the potential for RPA to evolve further and might even justify the hype that some find so distasteful and worthy of their online wrath. Contrary to what the detractors claim, most, if not all, of the large RPA implementations are delivering high ROI. Technology tends to succeed or fail based on what it delivers.
Sigstore uses the OpenID authentication protocol to tie certificates to identities. This means a developer can use their email address or account with an existing OpenID identity provider to sign their software. This is different from traditional code signing, which requires obtaining a certificate from a certificate authority (CA) that's trusted by the maintainers of a particular software ecosystem, for example Microsoft or Apple. Obtaining a traditional code signing certificate requires going through special procedures that include identity verification or joining a developer program. The sigstore signing client generates a short-lived ephemeral key pair and contacts the sigstore PKI (public-key infrastructure), which will be run by the Linux Foundation. The PKI service checks for a successful OpenID Connect grant and issues a certificate based on the key pair that will be used to sign the software. The signing event is logged in the public log, and then the keys can be discarded. This is another difference from existing code signing, because each signing event generates a new key pair and certificate. Ultimately, the goal is to have public proof that a particular identity signed a particular file at a particular time.
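The flow described above (OIDC grant → short-lived key → certificate → sign → public log → discard keys) can be mocked in a few lines. This is an illustrative sketch only: a symmetric HMAC key stands in for the real ephemeral ECDSA key pair, and `TRANSPARENCY_LOG`, `check_oidc_grant`, and the "grant:" token format are invented stand-ins for sigstore's Rekor log, Fulcio CA, and a real OpenID Connect flow.

```python
import hashlib
import hmac
import secrets
import time

TRANSPARENCY_LOG = []  # stand-in for Rekor, sigstore's public signature log

def check_oidc_grant(token):
    # Stand-in for a real OpenID Connect flow: accept any token of the
    # invented form "grant:<email>" and return the verified identity.
    if not token.startswith("grant:"):
        raise PermissionError("no valid OIDC grant")
    return token.split(":", 1)[1]

def sign_artifact(artifact: bytes, oidc_token: str):
    identity = check_oidc_grant(oidc_token)
    # Short-lived key; a symmetric HMAC key stands in here for the real
    # ephemeral key pair certified by the PKI service.
    key = secrets.token_bytes(32)
    cert = {"identity": identity, "key": key.hex(), "issued_at": time.time()}
    signature = hmac.new(key, artifact, hashlib.sha256).hexdigest()
    TRANSPARENCY_LOG.append({
        "identity": identity,
        "artifact_sha256": hashlib.sha256(artifact).hexdigest(),
        "signature": signature,
    })
    # The key is discarded once this function returns; the public log
    # entry remains as proof of the signing event.
    return cert, signature

def verify_artifact(artifact: bytes, cert: dict, signature: str) -> bool:
    key = bytes.fromhex(cert["key"])
    expected = hmac.new(key, artifact, hashlib.sha256).hexdigest()
    return hmac.compare_digest(expected, signature)
```

Each call to `sign_artifact` generates a fresh key, mirroring sigstore's per-event key pairs.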
“The impact of digital twins goes beyond the asset and extends to logistics,” Weiss explains. “This has significant implications for mission readiness if an asset deploys to remote or hazardous locations. By understanding the condition of any given asset at any given time, sustainment leaders can anticipate maintenance requirements, ensuring the right components and personnel are in the right place at the right time, making real the concept of condition-based maintenance plus (CBM+).” CBM, sometimes called predictive maintenance (PdM), is a maintenance methodology that utilizes sensors to assess the health of the system. The health information, in addition to other inputs, helps to drive maintenance activities. In a CBM environment, operating platforms, embedded sensors, inspections, and other triggering events determine when restorative maintenance tasks are required based on evidence of need. Combined, these benefits drive affordable and resource-optimized sustainment operations and data-informed decisions that significantly increase operational availability. In a single-use-case assessment of two aircraft engine components, a U.S. military service branch reported potential savings of approximately $42 million annually by using a digital twin.
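The evidence-of-need trigger at the heart of CBM can be sketched as a simple threshold check over recent telemetry. The field names (`vibration`, `temp`), window size, and limits below are hypothetical and would come from the asset's engineering data in practice.

```python
def needs_maintenance(readings, vibration_limit=7.0, temp_limit=95.0, window=5):
    """Flag a component when recent sensor evidence crosses its limits.

    readings: chronological list of dicts with hypothetical
    'vibration' and 'temp' keys from embedded sensors.
    """
    recent = readings[-window:]  # evaluate only the latest telemetry window
    avg_vibration = sum(r["vibration"] for r in recent) / len(recent)
    max_temp = max(r["temp"] for r in recent)
    # Restorative maintenance is triggered by evidence of need,
    # not by a fixed calendar schedule.
    return avg_vibration > vibration_limit or max_temp > temp_limit
```

A real digital twin would replace these static limits with a physics- or ML-based health model, but the trigger logic has the same shape.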
Changing company culture is just one step on the path to successful digital transformation. It requires the resources, knowledge, and skills to support and sustain the initiative. Providing sufficient training and targeted reskilling or upskilling for current employees is a necessary part of building the workforce of tomorrow. Digital literacy dovetails with key soft skills, including adaptability, problem-solving, effective communication, and emotional intelligence, which all correlate with effective teamwork and leadership. At the same time, investing in the workforce’s digital literacy helps nurture employee engagement and can improve retention. The costs of recruiting and training new staff can be substantial, so assessing and augmenting the current workforce’s digital literacy should be a central feature of any strategy. Building those foundational skills helps support adoption. Cybersecurity is another crucial focus area, and building a culture of security is central to organizational resilience. That’s why it’s no surprise to see that 59% of CFOs plan to increase budgets for IT in 2021. The number and sophistication of cyber threats have risen substantially, and a company is only as secure as its weakest link. This is especially true for the digital workplace.
Things are always more complicated than we would hope, especially when it comes to the promises of new technologies. After more than 30 years as a consultant and analyst in emerging technologies, I’ve learned that the implementation of promises is always more difficult and problematic than we expect. How is this playing out with machine learning models? First, there is the basic problem of the data itself. Without a clear cycle of data management, machine learning models may cause more problems than they solve. You must then think about how a business can explain how decisions derived from a machine learning model were made. Here are a few questions that may arise:
- What are the hidden and not-so-hidden biases in the data selected to create a model?
- What are the ethical challenges that must be managed as we move to this brave new world of artificial intelligence? For example, how can a manager explain what business processes and rules are behind a model, and are those rules ethical?
- Do these models adhere to corporate or governmental requirements?
- How does an organization know if there is bias in a model that can impact business outcomes?
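One concrete way to probe the bias question above is to compare a model's selection rates across groups. The sketch below uses the four-fifths rule as an illustrative threshold; the `(group, approved)` data layout is an assumption, and a real audit would look at many more metrics than this single ratio.

```python
from collections import defaultdict

def selection_rates(decisions):
    """decisions: iterable of (group, approved) pairs.
    Returns the approval rate per group."""
    totals = defaultdict(int)
    approved = defaultdict(int)
    for group, ok in decisions:
        totals[group] += 1
        approved[group] += int(ok)
    return {g: approved[g] / totals[g] for g in totals}

def disparate_impact(rates):
    """Ratio of the lowest to the highest selection rate.
    Under the four-fifths rule, values below 0.8 warrant review."""
    lo, hi = min(rates.values()), max(rates.values())
    return lo / hi if hi else 1.0
```

Running this on a log of model decisions gives a first, coarse signal of whether outcomes differ systematically by group.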
Security firms including Red Canary and FireEye are now tracking the exploit activity in clusters and anticipate the number of clusters will grow over time. ESET researchers have detected at least ten APT groups using the critical flaws to target Exchange servers. When used in an attack chain, the exploits for these vulnerabilities could allow an attacker to authenticate as the Exchange server and deploy a Web shell so they can remotely control the target server. When Microsoft released patches for the four Exchange server zero-days, it attributed the activity with high confidence to a Chinese state-sponsored group called Hafnium. Now, as researchers observe Web shells stemming from suspected Exchange exploitation, they believe far more groups are responsible for the growth in attack activity. In a blog post released March 9, Red Canary analysts report none of the clusters they observe significantly overlap with the group Microsoft calls Hafnium; as a result, they are now tracking these clusters separately. "We don't know who is behind these clusters – we aren't sure if it's the same adversaries working together or different adversaries completely," the researchers write. "We're focusing narrowly on what we observe on victim servers for our clustering."
Clearview AI, the controversial firm behind facial-recognition software used by law enforcement, is being sued in California by two immigrants' rights groups to stop the company's surveillance technology from proliferating in the state. The complaint, which was filed Tuesday in California Superior Court in Alameda County, alleges Clearview AI's software is still used by state and federal law enforcement to identify individuals even though several California cities have banned government use of facial recognition technology. The lawsuit was filed by Mijente, NorCal Resist, and four individuals who identify as political activists. The suit alleges Clearview AI's database of images violates the privacy rights of people in California broadly and that the company's "mass surveillance technology disproportionately harms immigrants and communities of color." ... The lawsuit is the latest attempt by grassroots groups to clamp down on facial-recognition software, which is not widely regulated in the United States. In the absence of clear federal rules regarding the usage of the technology, a number of cities — such as San Francisco, Boston, and Portland, Oregon — have banned the technology in some capacity.
By integrating tokenized assets and equity into DeFi protocols, the functions of, for instance, existing liquidity pools and interest-rate-based protocols can be effectively applied to real-world assets. Thus, mainstream adoption for DeFi is fostered. But with tokenization, even features such as atomic swaps or flash loans become possible for real-world assets. This could substantially relieve the current illiquidity in the DeFi sector and also enable a far more diverse range of investment opportunities from the perspective of CeFi (centralized finance). Tokenization efficiently bridges the gap between these two worlds. Semi-automated lending and trading systems atop blockchain networks are the future of finance. Here, cutting out intermediaries while democratizing access through blockchain-based governance models makes room for new solutions. In theory, anyone with a connection to the internet can leverage DeFi protocols. Individuals worldwide can now conduct financial transactions more efficiently and at lower costs, especially when compared to the global remittances market. Currently, this still requires users to own crypto-assets.
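The atomic swaps mentioned above typically rest on hashed timelock contracts (HTLCs): funds are claimable only by revealing a secret preimage before a deadline, after which only the refund path remains. A toy, off-chain sketch of that claim logic follows; the class name and API are invented for illustration and omit the refund and on-chain settlement entirely.

```python
import hashlib

class HashTimeLock:
    """Toy hashed timelock: claimable with the secret preimage before
    the deadline; after the deadline only the refund path remains."""

    def __init__(self, secret_hash: str, deadline: float):
        self.secret_hash = secret_hash  # SHA-256 hex digest of the secret
        self.deadline = deadline        # time after which claims are rejected
        self.claimed = False

    def claim(self, preimage: bytes, now: float) -> bool:
        if now >= self.deadline:
            raise ValueError("lock expired; refund path only")
        if hashlib.sha256(preimage).hexdigest() != self.secret_hash:
            raise ValueError("wrong preimage")
        self.claimed = True
        return True
```

In an actual swap, two such locks on two chains share the same hash, so claiming one side reveals the preimage needed to claim the other.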
The Internet of Things has truly lived up to its hype. Between 2014 and 2019, the share of businesses employing IoT technologies grew from 13 percent to 25 percent. According to McKinsey, the number of IoT-connected devices will reach 43 billion by 2023. The blooming IoT domain has opened up a world of possibilities for skilled engineers and professionals, and the growing demand has widened the supply-demand gap. Inmarsat Research Programme research showed that as many as 47 percent of surveyed companies lacked appropriate IoT skills and were forced to outsource projects. With a suboptimal IoT workforce, up to 75 percent of IoT projects take twice as much time to complete, as noted by a Gartner survey. There are no fixed eligibility criteria to enter this field. However, engineering graduates specialising in IT, computer science, or electrical and electronics engineering are a better fit. A few colleges provide undergraduate courses in IoT or offer a computer science specialisation with IoT as a major subject. ... Since an IoT engineer deals with a large amount of data that is often unreliable, the ability to manage such data is of paramount importance.
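A first line of defense against the unreliable sensor data mentioned above is simple range filtering. A minimal sketch, assuming a temperature sensor rated for -40..85 °C (the limits and data shape are illustrative):

```python
def clean_readings(readings, lo=-40.0, hi=85.0):
    """Drop values a temperature sensor rated for lo..hi degrees C cannot
    produce: missing values, non-numeric junk, and out-of-range spikes."""
    cleaned = []
    for r in readings:
        # Exclude bools explicitly: in Python, True/False pass an int check.
        if isinstance(r, (int, float)) and not isinstance(r, bool) and lo <= r <= hi:
            cleaned.append(float(r))
    return cleaned
```

Production pipelines add deduplication, timestamp checks, and smoothing on top, but validation against the sensor's physical limits is usually the first gate.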
Like water, attackers are always trying to find the path of least resistance. It should not be surprising that instead of trying to get through sophisticated defenses on the infrastructure side, they explore vulnerabilities in web and mobile applications and web services. ... A potential attacker could exploit a vulnerability by executing an API call with specially crafted parameters or payloads, and this may lead to an injection attack and result in the attacker obtaining sensitive data (e.g., financial information) or executing unauthorized actions (e.g., transferring money to a bank account controlled by the attacker). In many cases, Dzihanau notes, such actions would be difficult to distinguish from typical application usage. ... “Unfortunately, automation is not everything, and developers need to obtain the necessary knowledge and make security part of their day-to-day work. Security aspects need to be addressed not only during testing but continuously, in design, development, and deployment too. While the terms 'security by design' and 'shift-left' are well known, organizations are only now starting to realize what changes and implications this brings to the development process.”
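The crafted-parameter attack described above is why strict server-side validation should run before any parameter reaches a query or downstream call. A minimal sketch for a hypothetical transfer endpoint; the account-ID format, amount cap, and parameter names are invented for illustration, and real code would also use parameterized queries rather than string-built SQL.

```python
import re

# Hypothetical account-ID format: two uppercase letters, then ten digits.
ACCOUNT_RE = re.compile(r"[A-Z]{2}\d{10}")

def validate_transfer(params: dict) -> list:
    """Return a list of validation errors; an empty list means the request
    may proceed. Runs before any query or downstream call sees the input."""
    errors = []
    # fullmatch rejects injection payloads like "x' OR '1'='1" outright.
    if not ACCOUNT_RE.fullmatch(params.get("to_account", "")):
        errors.append("to_account: malformed")
    try:
        amount = float(params.get("amount", ""))
        if not 0 < amount <= 10_000:
            errors.append("amount: out of range")
    except ValueError:
        errors.append("amount: not a number")
    return errors
```

Allowlisting known-good shapes like this is more robust than trying to blocklist every malicious pattern.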
Quote for the day:
"Leadership is working with goals and vision; management is working with objectives." -- Russel Honore