Users of Google's Chrome web browser are being urged to install the latest update immediately to patch two security vulnerabilities, one of which is already being exploited in the wild. As the National Cyber Security website reports, the two high-severity vulnerabilities, known as CVE-2019-13720 and CVE-2019-13721, are classed as "use-after-free" vulnerabilities. That means a remote attacker can corrupt data in memory and then execute arbitrary code; in other words, they allow a PC to be hijacked. One of the vulnerabilities is in Chrome's audio component, while the other is in the PDFium library, which Chrome uses for PDF document generation and rendering. Kaspersky researchers Anton Ivanov and Alexey Kulaev have already detected the audio component flaw being exploited in the wild, hence the urgency for users to update. The latest version of Chrome, released today to fix the security vulnerabilities, is version 78.0.3904.87, and it's available for Windows, Mac, and Linux.
Studies have found, for example, that high levels of relative power often correspond with increased neural activity in the brain’s behavioral activation system (BAS). BAS is a pattern of neural circuits posited by psychologist Jeffrey Alan Gray in 1970 as an explanation for how the brain processes the experience of rapid reward. Nestled deep in the brain, these circuits include the basal ganglia and parts of the prefrontal cortex. These circuits release the neurotransmitter dopamine, which is associated with pleasure. If you are a leader, the increase in BAS activity produced by the power of your role can make you more effective in noticeable ways — specifically, by increasing your attention to goal-relevant information, your comfort with innovation and risk taking, and your ability to think at a visionary level. Gray and subsequent psychologists have also posited that when the BAS is engaged, another system, called the behavioral inhibition system (BIS), tends to be more idle. The BIS, generally associated with the brain’s septohippocampal system, is linked to feelings of anxiety, sensitivity to punishment, frustration, and risk aversion.
Businesses of all kinds are investing in HR technology now more than ever. The 2019 HR Technology Market Report outlines some key findings on this front: investment in HR technology has increased by 29 per cent, and the market for HR technologies has grown by a noteworthy 10 per cent. Also highlighted were new trends towards artificial intelligence, a shift in core systems away from engagement and towards productivity, and growing recognition of the role the gig workforce plays. When it comes to HR, artificial intelligence is far more than a buzzword designed to impress the board of directors. Unilever offers a prime example: given that it recruits upwards of 30,000 individuals a year, it should come as no surprise that significant capital and manpower must be devoted to sifting through applications to identify the best people for the job. This changed dramatically with its AI-powered solution: partnering with Pymetrics, the business developed a platform that tests candidates’ aptitude and even processes 30-minute interview videos, using natural language processing and body-language analysis to assess their suitability for a given role.
While technology companies often have sophisticated AI capabilities, medtech companies have deep expertise in the clinical development of medical algorithms, such as translating data from an EKG lead into meaningful output that a physician can use. This clinical expertise and credibility with physicians could be useful to potential consumer tech partners. Moreover, consumer technology companies’ data science and AI expertise — combined with medtech’s ability to develop meaningful medical applications and algorithms — could lead to powerful offerings that will improve patient health. ... Regulators are working to develop regulatory guardrails as AI applications take off in medtech. Earlier this month, the US Food and Drug Administration (FDA) released a draft framework detailing the types of AI/machine learning-based algorithm changes in medical devices that might be exempt from pre-market submission requirements. As part of the Consumer Technology Association’s AI initiative, AdvaMed, Google, Doctor On Demand and other organizations will work to develop standards and best practices for AI use cases in medicine and health.
5G can also assist manufacturers in optimising their operations by using IoT sensors to monitor the performance of equipment and workers so improvements in working processes can be identified. In fact, research from IDC found that IoT technology can boost productivity in the supply chain by 15%. Utilising IoT-based monitoring can also enable predictive maintenance, reducing overall maintenance costs by up to 30%, says Accenture. What’s more, the incredibly low latency offered by next-generation connectivity can enable remote operation of equipment. This enables automation of machinery and the use of untethered robots, helping to make factories safer. 5G infrastructure can also help unlock actionable insights from the vast amounts of data generated by the ever-growing number of connected devices. Data analytics can bring operational efficiencies and cost savings while logistics can also be enhanced with real-time tracking data. Many manufacturing businesses will make use of private, on-premise 5G networks.
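The predictive-maintenance idea above can be sketched as a simple rolling-average threshold check on IoT sensor readings; the sensor values, window size, and threshold below are illustrative assumptions, not from any particular vendor's product.

```python
from collections import deque

def maintenance_alerts(readings, window=5, threshold=1.3):
    """Flag readings that exceed the rolling mean by a given factor.

    A reading well above the recent average (e.g. a vibration or
    temperature spike) is treated as an early sign that a machine
    needs servicing. Window and threshold are illustrative.
    """
    recent = deque(maxlen=window)
    alerts = []
    for i, value in enumerate(readings):
        if len(recent) == window and value > threshold * (sum(recent) / window):
            alerts.append(i)
        recent.append(value)
    return alerts

# Hypothetical vibration readings from a factory-floor sensor (mm/s)
vibration = [2.0, 2.1, 1.9, 2.0, 2.2, 2.1, 3.5, 2.0]
print(maintenance_alerts(vibration))  # → [6]: the spike at index 6
```

In practice a real deployment would stream these readings over the 5G network and apply far more sophisticated models, but the principle is the same: act on anomalies before the equipment fails.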
The key delivery metrics require surfacing data from a myriad of sources, including workflow-management tools, code repos, and CI/CD tools, as well as collecting quantitative feedback from the engineering team themselves (via collaboration hubs). The complexity of the data and the multiple sources make this sort of data collection very time consuming to do manually, and doing it at scale really requires an end-to-end delivery metrics platform. Delivery metrics platforms are available that consist of a data layer to collate and compile metrics from multiple data sources and a flexible UI layer for creating custom dashboards that surface the metrics in the desired format. ... Done well, Root Cause RAG (red/amber/green) Reports can be a really effective means of presenting our (more accurate) forecasts in a way that stakeholders can understand, and can therefore be an important step in reducing lateness and bringing the technology team and the internal client much closer together. As discussed, however, this relies on an understanding of the metrics that actually determine project lateness and a means of collecting those metrics.
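As a minimal sketch of the data-layer idea, the snippet below collates schedule metrics from several hypothetical sources and rolls them up into a red/amber/green status. The source names, metric keys, and thresholds are all assumptions for illustration, not a real platform's API.

```python
def rag_status(slip_days):
    """Map schedule slip (in days) to a red/amber/green rating.

    Thresholds are illustrative; a real delivery-metrics platform
    would make these configurable per project.
    """
    if slip_days <= 0:
        return "green"
    if slip_days <= 5:
        return "amber"
    return "red"

def collate(sources):
    """Merge per-source metric dicts into one project report."""
    report = {}
    for metrics in sources:  # e.g. workflow tool, code repo, CI/CD
        report.update(metrics)
    report["status"] = rag_status(report.get("slip_days", 0))
    return report

# Hypothetical metrics pulled from three tools
workflow = {"open_tickets": 42, "slip_days": 3}
repo = {"merged_prs": 17}
ci = {"failed_builds": 2}

print(collate([workflow, repo, ci])["status"])  # → amber: 3 days behind
```

The point of the UI layer is then just to render reports like this one as dashboards, so stakeholders see the same roll-up the engineering team does.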
Open source allows people to collaborate and promotes a meritocracy of ideas. In doing so, Kubernetes helps companies harness processing power and run their software more efficiently, no matter how many machines they have and no matter how many competing cloud services they’re using. This is especially useful for companies without a refined IT function, as it makes managing commercial software on cloud servers much less of a headache. These abilities are all underpinned by open source code, so a company can build a system tailored to its needs, one that will evolve as the company becomes more successful and expands its operations. Originally open sourced by Google in 2014, Kubernetes has remained relevant because of the open source community that supports it, and it’s consistently one of the top projects on GitHub, the code-hosting platform used by developers to store and manage code. Twitter, Huawei, Intel, Cisco and IBM are just some of the businesses that have been involved in its development over the years, thanks to the fact that Google donated it to the Cloud Native Computing Foundation, a collective of open source development advocates.
Non-Agile methodologies make an implicit assumption that requirements are final and that a change management process can accommodate only minor variations in them. Design requirements, also called acceptance criteria, are subject to constant, planned change in Agile iterations. Agile enables product managers to demonstrate working software and elicit customer feedback. If the user needs aren't met, the product owner and developers make change requests to the application code, and possibly alter the delivery schedule. Thus, change management is an inherent part of the Agile software development process. The ability to demo working applications means you can design for customer expectations. Rather than create and develop an application workflow based on only written requirements or feature descriptions, keep the customer informed of the application and its functionality. If a development team spends six months working on an app and delivers it on time to the customer, that's a good thing -- as long as that application aligns with the customer's expectations. If it doesn't meet user needs, the delivery is not successful. Keep the customer in the loop and manage requirement changes accordingly for long-term application success.
The carriers’ approach to the IoT market is two-pronged, in that they sell connectivity services directly to end users as well as selling connectivity wholesale to device makers. For example, one customer might buy a bunch of sensors directly from Verizon, while another might buy equipment from a specialist manufacturer that contracts with Verizon to provide connectivity. There are, experts agree, numerous advantages to simply handing off the wireless networking of an IoT project to a major carrier. Licensed networks are largely free of interference: the carriers own the exclusive rights to the RF spectrum being used in a designated area, so no one else is allowed to use it without risking the wrath of the FCC. In contrast, a company using unlicensed technologies like Wi-Fi might be competing for the same spectrum with half a dozen other organizations. Licensed spectrum is also better secured than most unlicensed technologies, or at least easier to secure, according to Shawn Chandler, former chair of the IEEE’s IoT smart cities working group. Running connectivity that has to be managed and secured in-house can be a lot more work than letting one of the carriers take care of it.
The second school of thought is that we might add too much complexity by going all-in native. Although there are advantages, moving to Kubernetes-native systems means having at least two of everything. Enterprises moving to Kubernetes-driven, container-based applications are looking for a common database system, one that spans applications inside and outside Kubernetes. Same with security, raw storage, and other systems that may be native to the cloud, but not Kubernetes. What’s the correct path? One of the lessons I’ve learned over the years is that best-of-breed and fit-to-purpose technology is typically the right way to go. That typically means going native across the board, but you still need to be smart about picking solutions that will work longer term, native or not. Will there be more complexity? Of course, but this is really the least of your worries, considering the movement to multicloud and IoT-based applications. Things will get complex out there whether or not you’re using a native Kubernetes solution. We might as well get good at complexity, and do things right the first time.
Quote for the day:
"Real leaders are ordinary people with extraordinary determinations." -- John Seaman Garns