When workers’ homes become their offices, commutes may fall out of the carbon equation, but what happens inside those homes must be added in. How much energy is being used to run the air conditioner or heater? Is that energy coming from clean sources? In some parts of the country during lockdown, average home electricity consumption rose more than 20% on weekdays, according to the International Energy Agency. The IEA’s analysis suggests that workers who use public transport or drive less than four miles each way could actually increase their total emissions by working from home. Looking further ahead, the questions multiply. Many Shopify employees live near the office and walk, bike or take public transit. Will remote work mean they move from city apartments to sprawling suburban homes, which use, on average, three times more energy? Will they buy cars? Will those cars be electric, or gas-powered SUVs? “You have company control over what takes place in the office,” Kauk noted. “When you have everyone working remotely from home, corporate discretion is now employee discretion.”
There are many reasons to learn and design with serverless microservices, but that doesn’t mean they are perfect for every situation – just like microservices in general. If your workloads are stable or predictable in size, you generally won’t see the long-term financial benefits of serverless, which come from the platform scaling up and down in response to unpredictable demand. Additionally, one downside of serverless and functions-as-a-service is magnified for stateful microservices: either a longer “cold start” when a function spins up from scratch, or the inability to rely on long-lived in-memory state. One final caveat for serverless offerings is vendor lock-in: cloud-provider-specific serverless offerings encourage deeply integrated architectural decisions that can be severely impacted should the offering change its capabilities, requirements, or pricing.
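The statelessness constraint is easiest to see in a function-as-a-service handler. A minimal sketch (the `handler(event, context)` signature follows the common AWS-Lambda-style convention; the names and event shape are illustrative assumptions):

```python
import json
import time

# Simulated "cold start" work: anything at module level (config, DB
# connections, model loading) runs once per container instance, not per
# invocation -- the longer it takes, the worse the cold-start penalty.
_INIT_TIME = time.perf_counter()

def handler(event, context=None):
    # Hypothetical Lambda-style entry point. Anything the function needs
    # must arrive via the event or come from external storage: the
    # container may be recycled between calls, so long-lived in-memory
    # state cannot be relied on -- the stateful-microservice problem
    # described above.
    name = event.get("name", "world")
    return {"statusCode": 200, "body": json.dumps({"hello": name})}
```

Calling `handler({"name": "dev"})` returns a response dict; any state it produced is gone once the platform tears the container down.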
"Cybersecurity is seen as a cost centre to the business -- something you have to do, but only to a minimal degree, like paying the light bill. We need to shift the conversation to aligning our security programs with the business," says Alexander. "Businesses have a tendency to invest in things they see value in. We need to ensure they see the value in our cybersecurity programs -- including people, training and technology," she added. People and training are a key issue here: technology changes fast and the methods cyber criminals use to break into networks are constantly evolving, so it's important for organisations not only to hire the right people, but also to invest in training them so they can keep pace with the latest threats and with new forms of technology. But that pipeline doesn't start with employers: to ensure there are enough people to fill cybersecurity jobs going forward, education and training pathways are needed. "At a societal level, we have to do more to educate school age children about cybersecurity and career opportunities," says Jon Oltsik, Senior Principal Analyst and ESG Fellow.
Outbound events are already the preferred integration method for most modern platforms. Most cloud services emit events. Many data sources (such as CockroachDB changefeeds and MongoDB change streams) and even file systems (for example, Ceph notifications) can emit state-change events. Custom-built microservices are no exception. Emitting state-change or domain events is the most natural way for modern microservices to fit uniformly among the event-driven systems they connect to, and to benefit from the same tooling and practices. Outbound events are bound to become a top-level microservices design construct for many reasons. Designing services with outbound events can help replicate data during an application modernization process. Outbound events are also the enabler for implementing elegant inter-service interactions through the Outbox pattern, and complex business transactions that span multiple services using a non-blocking Saga implementation.
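The Outbox pattern mentioned above can be sketched in a few lines. This is a minimal illustration using an in-memory SQLite database; the table names are assumptions, and in a real system a separate relay (for example a CDC tool such as Debezium) would tail the outbox table and publish its rows to a broker:

```python
import json
import sqlite3

conn = sqlite3.connect(":memory:")
conn.executescript("""
    CREATE TABLE orders (id INTEGER PRIMARY KEY, item TEXT);
    CREATE TABLE outbox (id INTEGER PRIMARY KEY, topic TEXT, payload TEXT);
""")

def place_order(item):
    # The state change and the outbound event are committed in ONE
    # transaction, so they can never diverge -- the core of the pattern.
    # Publishing happens asynchronously from the outbox table.
    with conn:  # commits both inserts atomically, rolls back on error
        cur = conn.execute("INSERT INTO orders (item) VALUES (?)", (item,))
        conn.execute(
            "INSERT INTO outbox (topic, payload) VALUES (?, ?)",
            ("orders", json.dumps({"order_id": cur.lastrowid, "item": item})),
        )
    return cur.lastrowid

place_order("book")
events = conn.execute("SELECT topic, payload FROM outbox").fetchall()
```

The design choice is that the service never calls the broker directly inside the business transaction; it only writes rows, which keeps the write path non-blocking and crash-safe.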
Google engineers and others at vendors like Portworx understood that extensions were needed to enable Kubernetes to do such jobs as manage compute allocations, data security and networking, so the CNI (container network interface) and CSI (container storage interface) were created, leading to “a new avatar for the second coming of Kubernetes,” he says. “Kubernetes was originally – and still is, obviously – being used to manage containers,” Thirumale says. “But with these extensions of CNI, CSI and security extensions, Kubernetes can actually be used to manage data and storage and manage networking and all of that. If I were to put a Kubernetes layer in the middleware layer, looking upwards, it’s managing where the containers land. But looking down, it’s actually now managing infrastructure. There’s a whole new way of managing infrastructure. The traditional way was you had to go to the storage admin and say, ‘Give me five more nodes and give it to me in these terabytes and with this capability and all of that,’ then they’d provision your EMC box or a Pure box or NetApp box or what have you.”
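The self-service model Thirumale describes replaces that ticket to the storage admin with a declarative claim. A hypothetical CSI-backed PersistentVolumeClaim, written here as the Python dict a Kubernetes client would serialize to YAML/JSON (the claim name, storage-class name, and size are illustrative assumptions):

```python
# A developer declares WHAT they need; the CSI driver behind the
# storage class decides HOW to provision it on the underlying array.
pvc = {
    "apiVersion": "v1",
    "kind": "PersistentVolumeClaim",
    "metadata": {"name": "app-data"},          # hypothetical claim name
    "spec": {
        "accessModes": ["ReadWriteOnce"],
        "storageClassName": "fast-ssd",        # backed by a CSI driver
        "resources": {"requests": {"storage": "10Gi"}},
    },
}
```

Submitting this manifest is the whole interaction: Kubernetes, looking "down" at infrastructure, asks the CSI plugin to carve out the volume, with no per-request conversation with a storage admin.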
This year’s study reveals the immense opportunity ahead for tech pros and IT leadership to align and collaborate on priorities and policies, positioning not only individual organizations but the industry at large to succeed in a future built on risk preparedness. “Technology professionals today are under even greater pressure to ensure optimized, secure performance for remote workforces while facing limited time and resources for personnel training. When it comes to risk management and mitigation, prioritizing intentional investments in technology solutions that meet business needs is critical,” said Sudhakar Ramakrishna, President and CEO, SolarWinds. “More than ever before, tech pros must partner closely with business leaders to ensure they have the resources and headcount necessary to proactively address security risks. And more importantly, tech pros should constantly assess their risk management and mitigation protocols to avoid falling into complacency and being ‘blind’ to risk.”
The combination of reinforcement learning and deep neural networks, known as deep reinforcement learning, has been at the heart of many advances in AI, including DeepMind’s famous AlphaGo and AlphaStar models. In both cases, the AI systems were able to outmatch human world champions at their respective games. But reinforcement learning systems are also notorious for their lack of flexibility. For example, a reinforcement learning model that can play StarCraft 2 at an expert level won’t be able to play a game with similar mechanics (e.g., Warcraft 3) at any level of competency. Even slight changes to the original game will considerably degrade the AI model’s performance. “These agents are often constrained to play only the games they were trained for – whilst the exact instantiation of the game may vary (e.g. the layout, initial conditions, opponents) the goals the agents must satisfy remain the same between training and testing. Deviation from this can lead to catastrophic failure of the agent,” DeepMind’s researchers write in a paper that provides the full details of their open-ended learning approach.
The lawsuit stems from users' complaints about the company's data privacy and security practices, including instances in which customers had their video conferences interrupted by "Zoom bombing," in which attackers gained access to meeting passwords or bypassed security features and disrupted the proceedings with profanity and offensive images. During the COVID-19 global pandemic, many organizations have turned to Zoom and other tech firms for video conferencing and collaboration services, which led to an increase in hacking attempts. At one point, the U.S. Justice Department warned that prosecutors could bring federal charges against those who disrupted meetings through Zoom bombing. In April 2020, an analysis by Citizen Lab, a group based at the University of Toronto that studies surveillance and its impact on human rights, found that although Zoom advertised that it used full end-to-end encryption, the company only deployed the inadequate AES-128 encryption standard within its cloud-based videoconferencing platform.
Though the connection between creativity and risk-taking seems intuitive, social scientists have struggled to show a direct link between the two. That’s because measuring creativity itself has proven to be devilishly difficult. “Past studies which aimed to explore the relationship between creativity and risk-taking have equated creativity to measures such as associational fluency, divergent thinking, tolerance of ambiguity, creative lifestyle, or intellectual achievements,” psychologists Vaibhav Tyagi, Yaniv Hanoch, Stephen D. Hall, and Susan L. Denham of Plymouth University in the UK and Mark Runco of the University of Georgia wrote in 2017, in Frontiers in Psychology. But, they added, “each of these measures only provides a narrow insight into some aspects of creativity.” Adopting a different approach, the researchers looked at creativity as a multidimensional trait involving self-described personality and creative achievements, ideation (the process of forming new ideas), association formation, and problem-solving, among other qualities.
The chief concern of using AI in defence and weaponry is that it might not perform as desired, leading to catastrophic results. For example, it might miss its target or launch attacks that are not approved, lead to conflicts. Most countries test their weapons systems reliability before deploying them in the field. But AI weapon systems can be non-deterministic, non-linear, high-dimensional, probabilistic, and continuously learning. For testing a weapon system with such capabilities, traditional testing and validation techniques are insufficient. Furthermore, the race between the world’s superpowers to outpace each other has also made people uneasy as countries might not play by the norms and consider ethics while designing weapons systems, leading to disastrous implications on the battlefield. As defence starts leaning towards technology, it becomes imperative that we evaluate the loopholes of AI-based defence technologies that bad actors might exploit. For example, adversaries might seek to misuse AI systems by messing with training data or figuring out ways to gain illegal access to training data by analysing the specifically tailored test inputs.
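The training-data tampering mentioned above can be demonstrated on a toy classifier. A sketch assuming nothing about any real weapon system: a nearest-centroid classifier on synthetic 1-D data, before and after an attacker injects mislabeled outliers into the training set:

```python
import random

random.seed(1)

# Synthetic 1-D data: class 0 clusters near 0.0, class 1 near 5.0.
train = [(random.gauss(0, 0.5), 0) for _ in range(50)] + \
        [(random.gauss(5, 0.5), 1) for _ in range(50)]
test  = [(random.gauss(0, 0.5), 0) for _ in range(20)] + \
        [(random.gauss(5, 0.5), 1) for _ in range(20)]

def nearest_centroid(data):
    """Return a classifier that picks the class with the closer mean."""
    means = {}
    for label in (0, 1):
        pts = [x for x, y in data if y == label]
        means[label] = sum(pts) / len(pts)
    return lambda x: min(means, key=lambda lbl: abs(x - means[lbl]))

def accuracy(model, data):
    return sum(model(x) == y for x, y in data) / len(data)

clean = nearest_centroid(train)

# Poisoning attack: inject far-out points mislabeled as class 0,
# dragging the class-0 centroid past the class-1 cluster.
poison = [(20.0, 0)] * 50
dirty = nearest_centroid(train + poison)

print(f"clean accuracy:    {accuracy(clean, test):.2f}")
print(f"poisoned accuracy: {accuracy(dirty, test):.2f}")
```

With the poisoned centroid, every genuine class-0 sample lands on the wrong side of the decision boundary, illustrating why training pipelines for safety-critical systems need data provenance and validation, not just model testing.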
Quote for the day:
"True leaders bring out your personal best. They ignite your human potential" -- John Paul Warren