AI systems are the product of many different decisions made by those who develop and deploy them. From system purpose to how people interact with AI systems, we need to proactively guide these decisions toward more beneficial and equitable outcomes. That means keeping people and their goals at the center of system design decisions and respecting enduring values like fairness, reliability and safety, privacy and security, inclusiveness, transparency, and accountability. The Responsible AI Standard sets out our best thinking on how we will build AI systems to uphold these values and earn society’s trust. It provides specific, actionable guidance for our teams that goes beyond the high-level principles that have dominated the AI landscape to date. The Standard details concrete goals or outcomes that teams developing AI systems must strive to secure. These goals help break down a broad principle like ‘accountability’ into its key enablers, such as impact assessments, data governance, and human oversight. Each goal is then composed of a set of requirements: steps that teams must take to ensure that AI systems meet the goals throughout the system lifecycle.
The key is to preplan your sacrifices rather than sacrifice parts of your life by default. Look at your normal schedule and think about where you could find the extra time and energy for your business without sacrificing the things you value most in life. Maybe you decide to stay up later after the kids are in bed to get work done. Maybe you stop binge-watching on Hulu so you can get to the gym. Maybe you give up that second round of golf each week to spend more time with your spouse. Maybe you leave the office for a couple of hours to catch your kid's soccer game and come back later. Maybe you sacrifice some money to bring in extra help for the business. Maybe you stop micro-managing everything in your business and actually delegate more responsibility to others. We all have areas where we spend our time that we can tweak. You just have to decide what's right for you. You'll always have to sacrifice something to build a business or accomplish anything extraordinary in life. But giving up what you value most is not a good trade-off. Make sure you're making smart sacrifices by giving up what doesn't matter for things that do.
The decision is part of a larger overhaul of Microsoft’s AI ethics policies. The company’s updated Responsible AI Standards (first outlined in 2019) emphasize accountability for who uses its services and greater human oversight of where these tools are applied. In practical terms, this means Microsoft will limit access to some features of its facial recognition services (known as Azure Face) and remove others entirely. Users will have to apply to use Azure Face for facial identification, for example, telling Microsoft exactly how and where they’ll be deploying its systems. Some use cases with less harmful potential (like automatically blurring faces in images and videos) will remain open-access. ... “Experts inside and outside the company have highlighted the lack of scientific consensus on the definition of ‘emotions,’ the challenges in how inferences generalize across use cases, regions, and demographics, and the heightened privacy concerns around this type of capability,” wrote Microsoft’s chief responsible AI officer.
OpenAI also has something for that. They have OpenAI CLIP, which stands for Contrastive Language-Image Pre-training. What this model does is bring together text and image embeddings. It generates an embedding for each text and an embedding for each image, and these embeddings are aligned with each other. The way this model was trained is that, for example, you have a set of images, like an image of a cute puppy. Then you have a set of text, like "Pepper the Aussie Pup." The training objective is that, hopefully, the embedding of this picture of the puppy and the embedding of the text, "Pepper the Aussie Pup," end up really close to each other. It's trained on 400 million image-text pairs, which were scraped from the internet. You can imagine that someone did indeed put an image of a puppy on the internet and wrote under it, "This is Pepper the Aussie Pup."
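The contrastive idea above can be illustrated with a toy sketch — this is not the real CLIP model or its API, just hand-made three-dimensional "embeddings" standing in for the vectors a trained encoder would produce. A matching image-caption pair should score a higher cosine similarity than a mismatched one:

```python
import math

def cosine_similarity(a, b):
    """Cosine similarity between two equal-length vectors."""
    dot = sum(x * y for x, y in zip(a, b))
    norm_a = math.sqrt(sum(x * x for x in a))
    norm_b = math.sqrt(sum(x * x for x in b))
    return dot / (norm_a * norm_b)

# Toy, hand-made 3-d "embeddings" -- real CLIP vectors have hundreds
# of dimensions and come from trained image and text encoders.
image_puppy = [0.9, 0.1, 0.2]      # embedding of the puppy photo
text_match = [0.8, 0.2, 0.1]       # "Pepper the Aussie Pup"
text_other = [0.1, 0.9, 0.7]       # an unrelated caption

# Contrastive training pulls matching pairs together in the shared
# space, so the matching caption scores higher than the unrelated one.
match = cosine_similarity(image_puppy, text_match)
mismatch = cosine_similarity(image_puppy, text_other)
assert match > mismatch
```

In the real model, the same comparison is done across a whole batch at once, pulling each image toward its own caption and pushing it away from every other caption in the batch.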
Quantum computers will likely offer exponential improvements over classical systems for certain problems, but to realize their potential, researchers first need to scale up the number of qubits and to improve quantum error correction. What’s more, the exponential speed-up over classical algorithms promised by quantum computers relies on a big, unproven assumption about so-called “complexity classes” of problems — namely, that the class of problems that can be solved on a quantum computer is larger than the class that can be solved on a classical computer. It seems like a reasonable assumption, and yet no one has proven it. Until it is proven, every claim of quantum advantage will come with an asterisk: the machine can do better than any known classical algorithm. Quantum sensors, on the other hand, are already being used for some high-precision measurements and offer modest (and proven) advantages over classical sensors. Some quantum sensors work by exploiting quantum correlations between particles to extract more information about a system than would otherwise be possible.
The cloud can’t scale proportionately to handle all the data that comes from IoT devices, and transporting data from the IoT devices to the cloud is bandwidth-limited. No matter the size and sophistication of the communications network, the sheer volume of data collected by IoT devices leads to latency and congestion. Several IoT applications, such as autonomous cars, rely on rapid, real-time decision-making. To be effective and safe, autonomous cars need to process data and make instantaneous decisions (just like a human being). They can’t be limited by latency, unreliable connectivity, and low bandwidth. Autonomous cars are far from the only IoT applications that rely on this rapid decision-making. Manufacturing already incorporates IoT devices, and delays or latency could impact the processes or limit capabilities in the event of an emergency. In security, biometrics are often used to restrict or allow access to specific areas. Without rapid data processing, there could be delays that impact speed and performance, not to mention the risks in emergency situations.
The landmark discovery, published in Nature today, was nine years in the making. "This is the most exciting discovery of my career," senior author and quantum physicist Michelle Simmons, founder of Silicon Quantum Computing and director of the Center of Excellence for Quantum Computation and Communication Technology at UNSW, told ScienceAlert. Not only did Simmons and her team create what's essentially a functional quantum processor, they also successfully tested it by modeling a small molecule in which each atom has multiple quantum states – something a traditional computer would struggle to achieve. This suggests we're now a step closer to finally using quantum processing power to understand more about the world around us, even at the tiniest scale. "In the 1950s, Richard Feynman said we're never going to understand how the world works – how nature works – unless we can actually start to make it at the same scale," Simmons told ScienceAlert. "If we can start to understand materials at that level, we can design things that have never been made before."
With tier-1 support, you have someone watching the stuff that is running. Their setup alerts them to the fact that something bad happened. They're going to turn to a tier-2 person and say, “Hey, can you check this out and see if it really is something bad?” And so the tier-2 person takes a look. Maybe they'll take a look at that laptop or that part of the network or a server. If it wasn't a false alert, and it looks like bad behavior, then it goes to tier 3. Typically, the person running that is much more detailed and technical. They'll do a forensic analysis. And they look at all of the bits that are moving: the communication and what happened. They know adversary tactics, techniques, and procedures (TTP). They’re really good at tracking the adversary in the environment. When you're looking for a third-party incident response and support agreement, you have to know what you, as a company, have the skills to do. Then you contract out for tier 2 or tier 3. They're going to come in and provide support. Service level agreements are critical. What are you expecting? The more you want, the more you're going to pay.
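The tier-1 → tier-2 → tier-3 flow described above can be sketched as a simple routing function. This is a minimal illustration of the escalation logic, not any vendor's API; the alert fields (`fired`, `confirmed_malicious`) are assumptions made up for the example:

```python
def triage(alert):
    """Route an alert through the tiered model described above.

    Tier 1 watches and escalates, tier 2 confirms or dismisses the
    alert, and tier 3 performs deep forensic analysis of the
    adversary's tactics, techniques, and procedures (TTPs).
    """
    # Tier 1: monitoring noticed nothing, so there is nothing to escalate.
    if not alert.get("fired"):
        return "no action"

    # Tier 2: check the laptop / network segment / server; close false alerts.
    if not alert.get("confirmed_malicious"):
        return "tier 2: false alert, closed"

    # Tier 3: confirmed bad behavior -- start forensic analysis.
    return "tier 3: forensic analysis started"

print(triage({"fired": True, "confirmed_malicious": False}))
print(triage({"fired": True, "confirmed_malicious": True}))
```

When contracting out tier 2 or tier 3, the service level agreement effectively defines which branches of this flow the third party owns and how quickly each handoff must happen.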
“Prioritize yourself. It is not selfish; it’s an act of self-care. Set aside an ‘hour of power’ every day, first thing in the morning. During this hour, go analog and keep all digital distractions away. Protect that time fiercely and find an activity that nourishes your mind. For instance, learn something new and exciting, read some non-fiction that is energizing and inspiring, journal, or meditate. Find what works for you and do it every day. “Get moving. A healthy mind needs a healthy body. Do something, anything, to get some physical activity into your day. If dancing to disco is your thing, turn up the volume and go for it. Posting it on TikTok is optional, and maybe not advisable. “Stay connected. You are not alone – no matter what you’re going through, someone else has experienced it. Showing vulnerability is not a weakness, it is a strength. Build and nurture a close group of trusted advisors, preferably outside your company. Build relationships before you need them. Don’t be afraid to ask for help. They can help you work through challenges and provide an avenue to help others on this journey.”
Zscaler Posture Control wants to make it easier for developers to take a hands-on approach to keeping their companies safe and incorporate best security practices during the development stage, according to Chaudhry. He says Zscaler hopes that 10% of its more than 5,600 customers will be using the company's entire cloud workload protection offering within the next year. "Doing patch management after the application is built is extremely hard," Chaudhry says. "It was important for us to make sure that the developers are taking a more active role in their part of the security implementation." Zscaler wants to learn from the 210 billion transactions it processes daily to better remediate risk on an ongoing basis, addressing everything from unpatched vulnerabilities and overprivileged entitlements to Amazon S3 buckets that have erroneously been left open, Chaudhry says. Zscaler will put data points from these transactions into its artificial intelligence model to better protect customers going forward.
Quote for the day:
"Leadership is the creation of an environment in which others are able to self-actualize in the process of completing the job." -- John Mellecker