MLOps and DevOps engineers require different skill sets. Firstly, developing machine learning models does not demand a software engineering background, since the focus is mainly on proof of concept and prototyping. Secondly, MLOps is more experimental in nature than DevOps: it calls for tracking different experiments, feature engineering steps, model parameters, metrics, etc. MLOps is also not limited to unit testing; various factors need to be considered, including data checks, model drift, and analysing model performance. Deploying machine learning models is easier said than done, as it involves various steps, including data processing, feature engineering, model training, model registry and model deployment. Lastly, MLOps engineers are expected to track data distributions over time to ensure the production environment stays consistent with the data the model was trained on. Last year, AI/ML research hit the doldrums in the wake of the pandemic; tech giants like Google slowed down hiring AI researchers and ML engineers, and Uber laid off its AI research and engineering team.
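The data-distribution tracking mentioned above can be made concrete with a small drift check. This is a minimal, hypothetical sketch (not from any particular MLOps platform): it flags a shift between a training sample and live production data using a standardized mean difference, where a real pipeline would use fuller tests such as Kolmogorov–Smirnov or the population stability index.

```python
import math
import random

def drift_score(train_sample, live_sample):
    """Standardized difference of means between a training sample and
    live data -- a crude stand-in for fuller drift tests (KS, PSI)."""
    def mean(xs):
        return sum(xs) / len(xs)
    def var(xs):
        m = mean(xs)
        return sum((x - m) ** 2 for x in xs) / (len(xs) - 1)
    pooled = math.sqrt((var(train_sample) + var(live_sample)) / 2)
    return abs(mean(train_sample) - mean(live_sample)) / pooled

# Hypothetical monitoring check: alert when the live feature distribution
# has drifted away from what the model was trained on.
random.seed(0)
train = [random.gauss(0.0, 1.0) for _ in range(1000)]
live = [random.gauss(1.5, 1.0) for _ in range(1000)]
if drift_score(train, live) > 0.5:
    print("feature drift detected -- consider retraining")
```

In practice such a score would be computed per feature on a schedule, with the threshold tuned to the feature's natural variability.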
The tool comes preloaded with published attack algorithms that can be used to bootstrap red team operations to evade and steal AI models. Since attacking AI systems also involves elements of traditional exploitation, security professionals can use the target interface and the built-in cmd2 scripting engine to hook into Counterfit from existing offensive tools. Additionally, the target interface allows for granular control over network traffic. We recommend using Counterfit alongside the Adversarial ML Threat Matrix, an ATT&CK-style framework released by MITRE and Microsoft that helps security analysts orient themselves to threats against AI systems. ... The tool can help scan AI models using published attack algorithms. Security professionals can use the defaults, set random parameters, or customize them for broad vulnerability coverage of an AI model. Organizations with multiple models in their AI system can use Counterfit’s built-in automation to scan at scale. Optionally, Counterfit lets organizations run relevant attacks against their AI systems any number of times to create baselines. Running these scans regularly as vulnerabilities are addressed also helps to measure ongoing progress toward securing AI systems.
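To make the idea of "published attack algorithms" concrete, here is a minimal white-box evasion sketch in the spirit of the fast gradient sign method, run against a toy hand-rolled logistic model. It is illustrative only and does not use Counterfit's actual interface; the model, weights, and function names are all made up for the example.

```python
import math

# Toy logistic "model"; in a white-box attack the weights are known.
W = [2.0, -1.0, 0.5]
B = 0.1

def predict(x):
    """Probability of the positive class for feature vector x."""
    z = sum(w * xi for w, xi in zip(W, x)) + B
    return 1.0 / (1.0 + math.exp(-z))

def fgsm(x, eps):
    """Fast-gradient-sign-style evasion: nudge each feature against the
    sign of its weight to push the score toward the opposite class."""
    return [xi - eps * math.copysign(1.0, w) for xi, w in zip(x, W)]

x = [1.0, 0.0, 1.0]      # originally classified positive
adv = fgsm(x, eps=0.8)   # small per-feature perturbation flips the label
print(predict(x), predict(adv))
```

Tools like Counterfit automate this loop at scale: choose an attack, run it against a target model's interface, and record whether the model's decision was evaded.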
The findings are going to obliterate a pile of work done by those who’ve been working hard to fix Spectre, the team says. “Since Spectre was discovered, the world’s most talented computer scientists from industry and academia have worked on software patches and hardware defenses, confident they’ve been able to protect the most vulnerable points in the speculative execution process without slowing down computing speeds too much. They will have to go back to the drawing board,” according to UVA’s writeup. The new lines of attack demolish current defenses because those defenses protect the processor only at a later stage of speculative execution. The team was led by UVA Engineering Assistant Professor of Computer Science Ashish Venkat, who picked apart Intel’s suggested defense against Spectre, known as LFENCE. That defense tucks sensitive code into a waiting area until the security checks have run, and only then is the sensitive code allowed to execute, he explained. “But it turns out the walls of this waiting area have ears, which our attack exploits. We show how an attacker can smuggle secrets through the micro-op cache by using it as a covert channel.”
The Drake developers have a philosophy of rigorous test-driven development. The governing equations for multibody physics are well known, but there are often bugs in a complex engine like this. If you scan the codebase, you will find unit tests that contain comparisons with closed-form solutions for nontrivial mechanics problems like a tumbling satellite, countless checks on energy conservation, and many other checks that help the rest of the team focus on manipulation with the confidence that the multibody models are implemented correctly. Importantly, this dynamics engine is not only for simulation. It is also built for optimization and for control. The exact same equations used for simulation can be used to compute forward or inverse kinematics and Jacobians. They can also be used for more complex queries like the gradient of an object’s center of mass. We provide smooth gradients for optimization whenever they are available (even through contact). Drake also supports symbolic computation, which is very useful for structured optimization and for use cases like automatically extracting the famous “lumped parameters” for parameter estimation directly from the physics engine.
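The point that the very same equations can serve both simulation and gradient computation can be illustrated with forward-mode automatic differentiation. The sketch below is not Drake's API; it hand-rolls dual numbers and Euler-integrates a damped pendulum, so the derivative of the final angle with respect to the pendulum length falls out of the same update equations used to simulate it (all names and constants are chosen for the example).

```python
import math

class Dual:
    """Forward-mode autodiff value: (v, d) carries f and df/dp together."""
    def __init__(self, v, d=0.0):
        self.v, self.d = v, d
    def __add__(self, o):
        o = o if isinstance(o, Dual) else Dual(o)
        return Dual(self.v + o.v, self.d + o.d)
    __radd__ = __add__
    def __mul__(self, o):
        o = o if isinstance(o, Dual) else Dual(o)
        return Dual(self.v * o.v, self.v * o.d + self.d * o.v)  # product rule
    __rmul__ = __mul__

def dsin(x):
    """sin for Dual numbers (chain rule)."""
    return Dual(math.sin(x.v), math.cos(x.v) * x.d)

def simulate(length, steps=1000, dt=0.002):
    """Euler-integrate a damped pendulum. If `length` carries a derivative
    seed, the gradient of the final angle w.r.t. length comes out of the
    exact same update equations used for plain simulation."""
    g, theta, omega = 9.81, Dual(1.0), Dual(0.0)
    inv_l = Dual(1.0 / length.v, -length.d / (length.v ** 2))
    for _ in range(steps):
        alpha = -g * inv_l * dsin(theta) + (-0.1) * omega
        omega = omega + dt * alpha
        theta = theta + dt * omega
    return theta

out = simulate(Dual(1.0, 1.0))  # seed d=1: out.d = d(theta_final)/d(length)
```

Drake generalizes this idea to full multibody dynamics (and to symbolic scalar types), which is what makes the one set of governing equations usable for simulation, optimization, and control alike.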
Not all ethical imperatives related to digital transformation are as debatable as the suggestion that it should be people-first; some are much more black and white, like the fact that you have to start somewhere to get anywhere. Luckily, “somewhere” doesn’t have to be from scratch. Governance, risk and compliance (GRC) standards can be used to create a highly structured framework that’s largely closed to interpretation and provides a solid foundation for building out and adopting digital solutions. The utility of GRC models applies equally to startups and multinationals, and offers more than just a playbook; thoughtful application of GRC standards can also help with leadership evaluation, progress reports and risk analysis. Think of it like using bowling bumpers — they won’t guarantee you roll a strike, but they’ll definitely keep the ball out of the gutter. Of course, a given company might not know how to create a GRC-based framework (just like most of us would be at a loss if tasked with building a set of bowling bumpers). This is why many turn to offerings like IBM OpenPages and frameworks such as COBIT and ITIL for prefab foundations.
Longitudinal learning is a teaching method that is gaining traction within academia, particularly for corporate training. This continuing education approach involves administering shorter assessments of specific content (such as whether to click on a URL embedded within an email sent by an unknown user) repeatedly over time. Through a consistent assessment process, security concepts and information are reinforced so that knowledge is retained and accumulated gradually. Studies on longitudinal learning in healthcare showed that testing medical students, combined with explaining the information, is the most effective way to drive long-term retention. Consistent, repetitive lessons are critical to help employees overcome the cognitive biases that cybercriminals count on to execute their attacks. The human mind is stingy; that is to say, the brain processes so much information daily that it is constantly trying to take shortcuts to save energy and enable multi-tasking. Cybercriminals know this, which is why impersonation attacks, phishing, and rnalicious URLs are so effective. Did you catch the typo in the last sentence? If not, look at the word “malicious” again.
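Tooling can codify the same lesson the eye misses. As an illustrative sketch (not from any particular security product), the hypothetical function below generates the classic “rn” for “m” lookalike substitution that phishing domains exploit, so a filter could compare an incoming domain against such variants of trusted names.

```python
def lookalike_variants(word):
    """Return visually confusable spellings of `word` using the
    'rn' <-> 'm' substitution favored by phishing domains."""
    variants = set()
    for i in range(len(word)):
        if word[i] == "m":
            variants.add(word[:i] + "rn" + word[i + 1:])
        if word[i:i + 2] == "rn":
            variants.add(word[:i] + "m" + word[i + 2:])
    return variants

print(lookalike_variants("malicious"))  # includes 'rnalicious'
```

Real confusable-detection covers many more substitutions (Unicode homoglyphs, '1' vs 'l', '0' vs 'o'), but the principle is the same: normalize what the reader's brain would shortcut past.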
A Microsoft research project, Project Freta, aims to change that, providing tools to identify malware running on virtual machines in the cloud. It takes an economic approach to managing malware, which is only valuable to bad actors as long as it's undetected: once identified on one system, malware code is no longer reusable, as its signature can be added to active scanning tools. But if we're to have any success, we need to be able to scan many thousands of devices at the push of a button. At the industrial scale of the cloud, traditional scanning techniques are too slow to find one or two compromised images in an ever-growing fleet. It's a reminder of that old Cold War adage: your attackers only have to be lucky once; you have to be lucky every time. Microsoft Research's security specialists have been thinking about this problem, and Project Freta encapsulates much of their thinking in a cloud-centric proof of concept. Designed to look for in-memory malware, it provides a portal where you can scan memory snapshots from Linux and Windows virtual machines. Initially focused on virtual machine instances, it's intended to show the techniques and tools that can be used to scan for malware at massive scale.
“Numerous data labelling firms have sprung up to address this growing need, and many of them are tapping into a global pool of ‘gig workers’ that can get this done effectively. Software and algorithms make it easier to divvy up tasks and have people work at their convenience. India offers a huge talent pool with ready access to smartphones and the ability to tap into a new income source or to supplement their earnings. The time difference, in this case, can even be an asset,” said Girish Muckai, Chief Sales & Marketing Officer of HEAL Software Inc. “Training AI models to deliver high levels of accuracy is critical to success. However, labelling training data sets is tedious work. It’s time-consuming, complex and requires a significant workforce. The tech industry’s outsourcing boom in India and its large population make it a growing hotbed of this precision work. Its people and skills position India as a key resource for years to come in an increasingly digital world,” said Lori McKellar, Senior Director, Product Marketing at OpenText. “India has emerged as a huge pool of employable workers to undertake data labelling jobs.
One of the things I wish I had known earlier in my career is that finding your passion is the most crucial part of the job. Don't misunderstand me -- finding your passion doesn't mean that you'll be doing what you love every day. It's about finding a company, industry, or role that you believe can make a difference. Working in IT is challenging. You'll have hard deadlines to meet, clients to impress, customers to help -- and working nights, weekends and holidays is all but inevitable in most jobs. However, the thing that will push you through it and make it all worthwhile is being passionate about the work you do. How can you tell if you're passionate about a company or an industry? You get excited thinking about what the business or industry does. This is so important. If you're not excited about the potential impact of your work, you're not passionate about the industry. This passion will help drive you through the more monotonous parts of your job. You're helping your customers: so many IT companies are now inventing problems to solve with their products instead of focusing on the issues consumers face. Look for a job that sees you actively helping consumers -- this will give you a sense of accomplishment at the end of the day.
Asynchronous collaboration and project management tools can serve as our panacea, an escape from the virtual spotlight and the constant time-suck of video chats and conference calls. These tools offer a respite by providing a means to collaborate effectively through cards and boards filled with status updates, comments, files, and even visual workflows that can take the place of (and, in some cases, improve upon) our beloved whiteboards. They can effectively replace non-productive meetings, allowing us to track our work, collaborate with our teammates, and achieve our objectives without the need for lights and cameras. Here, I’ll share some of the asynchronous collaboration and project management tools I find most useful and how they’ve helped us maximize productivity and collaboration. I must begin, though, with a couple of caveats. First, while the tools I’ll reference here are, on the whole, great, there are some drawbacks to asynchronous collaboration that you should be aware of, and I’ll go into detail about them a little further on. Second, these tools cannot and should not be considered permanent replacements for video calls.
Quote for the day:
"Brilliant strategy is the best route to desirable ends with available means." -- Max McKeown