Removing bias from AI is not easy because there's no one cause for it: bias can enter the machine-learning cycle at various points. But the logical and most promising starting point is the data that goes into it, says Ebert. AI systems rely on deep neural networks that parse large training data sets to identify patterns. These deep-learning methods are loosely modeled on the brain's structure, with many layers of code linked together like neurons, and the weights on those links changing as the network picks up patterns. The problem is that training data sets may lack enough data from minority groups, reflect historical inequities such as lower salaries for women, or inject societal bias, as in the case of Asian-Americans being labeled as foreigners. Models that learn from biased training data will propagate the same biases. But collecting high-quality, inclusive, and balanced data is expensive, so Mostly AI is using AI to create synthetic data sets to train AI. Simply removing sensitive features like race or changing them—say, increasing female salaries to affect approved credit limits—does not work, because it interferes with other correlations.
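The point that simply dropping a sensitive feature does not remove bias can be seen in a toy sketch. The data below is invented purely for illustration: department acts as a proxy for gender, so a model trained on department alone still reproduces most of the salary gap.

```python
# Toy illustration (invented data): dropping a sensitive column does not
# remove bias when another feature is correlated with it.
rows = [
    # (gender, dept, salary) -- dept is a proxy: dept "B" is mostly female here
    ("M", "A", 90), ("M", "A", 88), ("M", "A", 92), ("F", "A", 89),
    ("F", "B", 70), ("F", "B", 72), ("F", "B", 68), ("M", "B", 71),
]

def mean_salary(rows, key):
    """Average salary grouped by key(gender, dept)."""
    groups = {}
    for gender, dept, salary in rows:
        groups.setdefault(key(gender, dept), []).append(salary)
    return {k: sum(v) / len(v) for k, v in groups.items()}

# Even after "removing" gender, a model using only dept still assigns lower
# salaries to dept "B", where the women in this toy data set are concentrated.
print(mean_salary(rows, lambda g, d: g))  # gap by gender
print(mean_salary(rows, lambda g, d: d))  # gap persists via dept
```

This is why naive feature removal interferes with other correlations instead of fixing the bias.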
System-as-cause thinking is a pattern of thinking that determines what to include within the boundaries of our system (i.e., the extensive boundary) and the level of granularity of what is included (i.e., the intensive boundary). The extensive and intensive boundaries depend on the context in which we are analyzing the system and on what is under the control of the decision maker versus what is outside their control. Data scientists typically work with whatever data has been provided to them. While that is a good starting point, we also need to understand the broader context around how a model will be used and what the decision maker can control or influence. For example, when building a robo-advice tool we could include a number of different aspects, ranging from macro-economic indicators, asset class performance, and company investment strategies to the individual's risk appetite, life stage, and health condition. The breadth and depth of factors to be included depend on whether we are building a tool for an individual consumer, an advisor, a wealth management client, or even a policy maker in government.
“The malware we uncovered using this technique is an updated version of Shlayer, a family of malware that was first discovered in 2018. Shlayer is known to be one of the most abundant pieces of malware on macOS so we’ve developed a variety of detections for its many variants, and we closely track its evolution,” Bradley told TechCrunch. “One of our detections alerted us to this new variant, and upon closer inspection we discovered its use of this bypass to allow it to be installed without an end user prompt. Further analysis leads us to believe that the developers of the malware discovered the zero-day and adjusted their malware to use it, in early 2021.” Shlayer is adware that intercepts encrypted web traffic — including HTTPS-enabled sites — and injects its own ads, making fraudulent ad money for the operators. “It’s often installed by tricking users into downloading fake application installers or updaters,” said Bradley. “The version of Shlayer that uses this technique does so to evade built-in malware scanning, and to launch without additional ‘Are you sure’ prompts to the user,” he said.
First and foremost, you can’t use data to drive every action until you give every decision maker access to data and the tools to act on it. In essence, you have to approach data strategically — in a way that makes it available across departments and business users. This amps up data literacy and embeds fact-based decision making in the organizational culture. Secondly, I’ve also been known to have TV screens installed that show the latest dashboards to encourage executive buy-in. Now, executives stand in the hallway to consult them daily; it publicizes how leadership makes decisions and sets an example for the entire enterprise. ... Hard benefits such as new revenue streams, improved operations, improved customer engagement, cost reduction and risk avoidance can be quantified. This can be achieved by using financial models that incorporate costs, benefits and risks to arrive at the ROI, net present value (NPV), and payback period. However, the causal link between D&A and soft benefits — like productivity gains or continued innovation across the organization from culture change — remains elusive for me.
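The financial models mentioned above boil down to a few standard formulas. Here is a minimal sketch of NPV and payback period; the cash flows and discount rate are purely hypothetical figures for a D&A investment.

```python
# Minimal sketch of the financial model mentioned above: NPV and payback
# period for a hypothetical D&A investment (all figures are illustrative).
def npv(rate, cashflows):
    """Net present value; cashflows[0] is the upfront cost (negative)."""
    return sum(cf / (1 + rate) ** t for t, cf in enumerate(cashflows))

def payback_period(cashflows):
    """First period in which cumulative (undiscounted) cashflow turns non-negative."""
    total = 0.0
    for t, cf in enumerate(cashflows):
        total += cf
        if total >= 0:
            return t
    return None  # never pays back

flows = [-500_000, 150_000, 200_000, 250_000, 250_000]  # hypothetical
print(f"NPV @ 10%: {npv(0.10, flows):,.0f}")
print(f"Payback: year {payback_period(flows)}")
```

A positive NPV at the chosen discount rate is what justifies the hard-benefit case; the soft benefits are exactly the part these formulas cannot capture.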
Getting a microservice-based architecture to work around asynchronous events can give you a lot of flexibility and performance improvements. It’s not easy, since the communication can get a little tricky and debugging problems around it even more so, because there is no longer a clear data flow from Service 1 to Service 2. A great solution for that is to have an event ID created when the client sends its initial request, and then propagated to every event that stems from it. That way you can filter the logs using that ID and understand every message generated from the original request. Also, note that my diagram above shows the client interacting directly with the message queue. This can be a great solution if you provide a simple interface or if your client is internal and managed by your dev team. However, if this is a public client that anybody can code against, you might want to provide them with an SDK-like library they can use to communicate with you. Abstracting and simplifying the communication will help you secure the workflow and provide a much better developer experience for whoever is trying to use it.
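The event-ID (correlation-ID) pattern described above can be sketched in a few lines. The queue, topics, and function names below are illustrative stand-ins, not a real broker API: the ID is minted once at the edge and copied onto every downstream event, so one filter recovers the whole trace.

```python
# Sketch of the correlation-ID pattern: an event ID is minted when the
# client's initial request arrives, then copied onto every event that stems
# from it, so logs can be filtered by a single ID. Names are illustrative.
import uuid

def handle_client_request(payload):
    event_id = str(uuid.uuid4())          # minted once, at the edge
    publish("orders.created", event_id, payload)
    return event_id

def publish(topic, event_id, payload):
    QUEUE.append({"topic": topic, "event_id": event_id, "payload": payload})
    log(event_id, f"published to {topic}")

def consume():
    message = QUEUE.pop(0)
    # the consumer reuses the incoming event_id for anything it emits
    log(message["event_id"], f"consumed from {message['topic']}")
    publish("orders.validated", message["event_id"], message["payload"])

QUEUE, LOGS = [], []                      # in-memory stand-ins for broker/logs
def log(event_id, text):
    LOGS.append(f"[{event_id}] {text}")

request_id = handle_client_request({"sku": "abc-123"})
consume()
# every log line for this request can now be found with one filter:
trace = [line for line in LOGS if request_id in line]
```

In a real system the same idea is usually carried in a message header (often called a correlation or trace ID) rather than in the payload itself.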
A driving robot could be more readily shared around and used on a widespread basis. A true self-driving car with built-in capabilities is merely one car; a driving robot could drive any conventional car. As such, the driving robot has greater utility, plus its cost can be spread among a multitude of users or owners in a more versatile way than could that of a singular self-driving car. A driving robot might also provide additional uses. A true self-driving car has just one purpose: ostensibly, it is a car that drives, and that is all it does (though, notably, this is a darned impressive act!). A driving robot might be able to perform other tasks, such as getting out of the car and carrying a delivery package to the door of a house. Note that this is not a requirement per se, merely a potential added use that might be devised. There are also various disadvantages to using a driving robot versus aiming to craft a true self-driving car, though I won’t delineate those shortcomings here. I urge you to take a look at my earlier article on the topic to see the articulated list of downsides or drawbacks.
Given a set of heterogeneous machines and a set of heterogeneous production jobs, compute the processing schedule that minimizes specified metrics. Heterogeneous means that both the machines and the jobs can have different properties, e.g. different throughput for the machines and different required processing time for the jobs, and many more in practice. Additionally, the real problem is complicated by a set of imposed constraints, e.g. jobs of class A cannot be processed on machines of class B, etc. Theoretically, this problem is a complicated instance of the “Job scheduling” problem, which, together with “Capacitated vehicle routing”, is considered a classic of Combinatorial Optimization (CO). Though this problem is NP-hard (no known exact algorithm runs in polynomial time), it is rather well studied by the CO community, which offers a handful of methods to solve its theoretical (simplified) version. However, the majority of these methods cannot cope with real-world problem sizes or the additional constraints I’ve mentioned above. That is why most of the time people in industry resort to some form of stochastic search combined with domain-specific heuristics.
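As a flavor of the heuristic approach mentioned above, here is a minimal greedy sketch: longest jobs first, each assigned to the compatible machine that would finish it earliest. The machine speeds, job classes, and data are all invented for illustration; real schedulers layer stochastic search and many more constraints on top of baselines like this.

```python
# Greedy longest-job-first assignment onto heterogeneous machines, honouring
# a simple class-compatibility constraint. All data is illustrative.
machines = [
    {"id": "m1", "speed": 2.0, "classes": {"A", "B"}},  # throughput factor
    {"id": "m2", "speed": 1.0, "classes": {"A"}},       # cannot run class B
]
jobs = [
    {"id": "j1", "work": 8.0, "class": "A"},
    {"id": "j2", "work": 6.0, "class": "B"},
    {"id": "j3", "work": 4.0, "class": "A"},
]

def greedy_schedule(machines, jobs):
    """Assign longest jobs first to the compatible machine finishing earliest."""
    finish = {m["id"]: 0.0 for m in machines}   # busy-until time per machine
    plan = {}
    for job in sorted(jobs, key=lambda j: -j["work"]):
        compatible = [m for m in machines if job["class"] in m["classes"]]
        best = min(compatible,
                   key=lambda m: finish[m["id"]] + job["work"] / m["speed"])
        finish[best["id"]] += job["work"] / best["speed"]
        plan[job["id"]] = best["id"]
    return plan, max(finish.values())           # assignment and makespan

plan, makespan = greedy_schedule(machines, jobs)
```

Such a heuristic gives no optimality guarantee, which is exactly why it is typically combined with stochastic search (e.g. local moves that swap job assignments) in practice.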
The new remote working environments ushered in by the pandemic have expanded the attack surface, meaning the CISO needs added visibility over the network. According to Adam Palmer, chief cyber security strategist at Tenable, clear communication with the organisation’s board about possible risks can go a long way in empowering security leadership. “CISOs will need to be aware, and effectively list the vulnerabilities before they inform the board of directors of what is being done and how to reduce and address them,” said Palmer. “By using a risk-based approach CISOs can profile the distributed risk across the extended enterprise, and explain this in the boardroom in the same business terms other functions use so all can understand and evaluate any controls that need to be implemented to address that risk effectively and cost-efficiently. “It will be tempting for management to purchase additional tools to alleviate the overall risk levels, and it is important to remember that a magic bullet is not the only solution.”
CFOs are no longer bean counters with a fierce grip on the checkbook. Being a CFO today is about leadership -- understanding the growth levers that drive the business and the investments needed to get there. Right now, that growth driver is digital transformation, so CFOs must have a strong understanding of technology. CFOs are data-driven, using predictive analytics and machine learning to ensure initiatives are driving real impact. They should ask for data that indicates transformation efforts are maximizing ROI and driving tangible value across the business; they're looking to quantify the success of their digital transformation investments. ... CFOs are central to strategic decisions about transformation. They are focused on helping their companies not only survive the current climate but also come out stronger on the other side. While it can be hard for organizations to overhaul in the midst of uncertainty, it's the CFO's job to advocate for and invest in projects that will push the business forward. CFOs can ensure investments impact every aspect of the business and drive more engagement and commitment from business leaders, ultimately leading to better outcomes.
Should the attacker’s email manage to evade your mail gateway, the goal is to trick an employee into performing an action that executes a malicious payload. This payload is designed to exploit a vulnerability and provide the attacker with access to the environment. Ideally, you’ve got code execution policies in place so that only certain types of files can be executed, preventing anything delivered by email from being executed and restricting things as much as you possibly can. The attacker knows this and is constantly trying to work around it, which is why you need to maintain the ability to detect the execution of malicious payloads from phishing emails on employee endpoints. But how? Design and frequently run test cases that simulate malicious payloads being executed on your employee endpoints. Monitor logs and alerts when performing code execution test cases to validate that you have both the necessary coverage and the telemetry to recognize indicators of compromise. Where blind spots in telemetry are identified, develop and validate new detection use cases.
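The test-case loop described above can be sketched as a simple harness. Everything here is a hypothetical stub: in practice the simulation would be a harmless canary binary run on the endpoint, and the telemetry would come from your EDR/SIEM pipeline rather than an in-memory list.

```python
# Hypothetical sketch of the detection test-case loop: simulate a benign
# "payload execution" on each endpoint, then check the collected telemetry
# for the indicator your detection rules should fire on.
def simulate_payload_execution(endpoint, telemetry):
    # stands in for running a harmless canary binary on the endpoint
    telemetry.append({"host": endpoint, "event": "process_start",
                      "image": "canary_payload.exe"})

def detection_fires(telemetry, indicator):
    """True if any collected event contains the expected indicator."""
    return any(indicator in event.get("image", "") for event in telemetry)

telemetry = []  # stand-in for the log/alert pipeline
for endpoint in ["hr-laptop-01", "fin-desktop-07"]:
    simulate_payload_execution(endpoint, telemetry)

covered = detection_fires(telemetry, "canary_payload")
# if `covered` is False for some endpoint, you have found a telemetry
# blind spot and need a new detection use case
```

The value of running this frequently is regression testing: a logging agent that silently stops forwarding events shows up as a failed test rather than as a missed intrusion.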
Quote for the day:
"Coaching is unlocking a person's potential to maximize their own performance. It is helping them to learn rather than teaching them." -- John Whitmore