Daily Tech Digest - November 01, 2020

Why is Site Reliability Engineering Important?

“The term SRE was certainly introduced by Google, but several companies have, directly or indirectly, been doing SRE-related work for a long time, though I must say that Google gave the practice a new direction after coining the term ‘SRE.’ I have a clear view on SRE, as I believe it walks hand-in-hand with DevOps. All your infrastructure, operations, monitoring, performance, scalability and reliability factors are accounted for in a nice, lean and (preferably) automated system; however, this is not enough. Culture is an important aspect driving SRE, along with business needs. As the saying ‘to each his own’ goes, SRE is no different. It is easy to be inspired by pioneer companies, but it is impossible to copy their culture and methods and simply replicate their success, especially with your own ‘anti-patterns’ and ‘traditional’ remedial baggage. Do you have the same infrastructure and business needs as the company showcasing brilliant success with SRE? No. Can SRE help you? Absolutely. The key is to understand its fundamentals, recognize what is important to your own success blueprint, and find your own success factors in light of your cultural needs. Your strategy and culture need to walk together, just like your guiding (strategy) and driving (culture) factors.”


AI in Healthcare — Is the Future of Healthcare already here?

Through a series of neural networks, AI is helping healthcare providers achieve this balance. Facial recognition software is combined with machine learning to detect patterns in facial expressions that point toward the possibility of a rare disease. Moon, developed by Diploid, enables early diagnosis of rare diseases through software, allowing doctors to begin treatment earlier. Artificial intelligence in healthcare carries special significance in detecting rare diseases earlier than would otherwise be possible. ... Health monitoring is already a widespread application of AI in healthcare. Wearable health trackers such as those offered by Apple, Fitbit, and Garmin monitor activity and heart rates. These wearables can then forward the data to an AI system, yielding further insight into a person’s ideal activity level. These systems can detect workout patterns and send alerts when someone misses their workout routine. The needs and habits of a patient can be recorded and made available to them when needed, improving the overall healthcare experience. For instance, if a patient needs to avoid heavy cardiac workouts, they can be notified when high levels of activity are detected.
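As a concrete illustration of that last rule, here is a minimal, purely hypothetical sketch; the class name and the heart-rate threshold are invented for illustration and do not come from any vendor’s product:

    // Hypothetical sketch: flag high-intensity activity for a patient who
    // should avoid heavy cardiac workouts. Names and thresholds are invented.
    public class ActivityMonitor {
        private static final int MAX_SAFE_HEART_RATE_BPM = 120; // illustrative limit

        // Returns an alert message if a reading exceeds the patient's limit,
        // or null when no alert is needed.
        public static String checkReading(int heartRateBpm) {
            if (heartRateBpm > MAX_SAFE_HEART_RATE_BPM) {
                return "Alert: heart rate " + heartRateBpm
                        + " bpm exceeds the recommended limit for this patient.";
            }
            return null;
        }

        public static void main(String[] args) {
            System.out.println(checkReading(145)); // a reading from a wearable
        }
    }

A real system would learn such limits per patient rather than hard-coding them, but the alerting flow has the same shape.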


Why kids need special protection from AI’s influence

Algorithms can change the course of children’s lives. Kids are interacting with Alexas that can record their voice data and influence their speech and social development. They’re bingeing videos on TikTok and YouTube pushed to them by recommendation systems that end up shaping their worldviews. Algorithms are also increasingly used to determine what their education is like, whether they’ll receive health care, and even whether their parents are deemed fit to care for them. Sometimes this can have devastating effects: this past summer, for example, thousands of students lost their university admissions after algorithms, used in lieu of pandemic-canceled standardized tests, inaccurately predicted their academic performance. Children, in other words, are often at the forefront when it comes to using and being used by AI, and that can leave them in a position to get hurt. “Because they are developing intellectually and emotionally and physically, they are very shapeable,” says Steve Vosloo, a policy specialist for digital connectivity at Unicef, the United Nations Children’s Fund. Vosloo led the drafting of a new set of guidelines from Unicef designed to help governments and companies develop AI policies that consider children’s needs.


A new threat matrix outlines attacks against machine learning systems

Mikel Rodriguez, a machine learning researcher at MITRE who also oversees MITRE’s Decision Science research programs, says that we’re now at the same stage with AI as we were with the internet in the late 1980s, when people were just trying to make the internet work, not thinking about building in security. We can learn from that mistake, though, and that’s one of the reasons the Adversarial ML Threat Matrix has been created. “With this threat matrix, security analysts will be able to work with threat models that are grounded in real-world incidents that emulate adversary behavior with machine learning,” he noted. The matrix will also help them think holistically, and it will spur better communication and collaboration across organizations by giving them a common language, or taxonomy, for the different vulnerabilities, he says. “Unlike traditional cybersecurity vulnerabilities that are tied to specific software and hardware systems, adversarial ML vulnerabilities are enabled by inherent limitations underlying ML algorithms. Data can be weaponized in new ways, which requires an extension of how we model cyber adversary behavior to reflect emerging threat vectors and the rapidly evolving adversarial machine learning attack lifecycle,” MITRE noted.
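To see why such vulnerabilities live in the model rather than in any particular piece of software, consider a minimal evasion-attack sketch against a linear classifier; all weights, inputs and the perturbation budget below are invented for illustration:

    // Minimal evasion-attack sketch: nudge each input feature a small step in
    // the direction that raises the model's score, exploiting the model's own
    // gradient rather than any software bug. All numbers are invented.
    public class EvasionSketch {
        // Hypothetical weights of a trained linear model: score = w . x
        static final double[] W = { 1.5, -2.0, 0.5 };
        static final double EPSILON = 0.3; // attacker's per-feature budget

        static double score(double[] x) {
            double s = 0;
            for (int i = 0; i < x.length; i++) s += W[i] * x[i];
            return s;
        }

        public static void main(String[] args) {
            double[] x = { 1.0, 1.0, 1.0 }; // a benign, correctly scored input
            double[] adv = x.clone();
            // For a linear model, the gradient of the score with respect to
            // the input is simply W, so the attacker steps along sign(W).
            for (int i = 0; i < adv.length; i++) {
                adv[i] += EPSILON * Math.signum(W[i]);
            }
            System.out.printf("original score %.2f, adversarial score %.2f%n",
                    score(x), score(adv));
        }
    }

Small, barely visible changes to the input move the score substantially, which is exactly the class of behavior the matrix asks analysts to model alongside conventional attack techniques.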


Understanding the modular monolith and its ideal use cases

Conventional monolithic architectures layer code horizontally across functional boundaries and dependencies, which inhibits their ability to separate into functional components. The modular monolith revisits this structure, combining the simplicity of single-process communication with the freedom of componentization. Unlike the traditional monolith, modular monoliths attempt to establish bounded contexts by segmenting code into individual feature modules. Each module exposes a programming interface definition to other modules; if that definition changes, dependent modules may need to change in turn, so much of this rests on stable interface definitions. By limiting dependencies and isolating data stores, the architecture establishes boundaries within the monolith that resemble the high cohesion and low coupling found in a microservices architecture. Development teams can begin to separate functionality, but can do so without worrying about the management baggage tied to multiple runtimes and asynchronous communication. One benefit of the modular monolith is that its logic encapsulation enables high reusability while keeping data consistent and communication patterns simple.
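A minimal sketch of what such a boundary can look like inside a single Java process (all module and method names are invented for illustration):

    // Two feature modules in one process, coupled only through a stable
    // interface rather than through each other's internals.
    interface BillingApi { // the billing module's published contract
        double outstandingBalance(String customerId);
    }

    class BillingModule implements BillingApi { // internals stay private to the module
        @Override
        public double outstandingBalance(String customerId) {
            return 42.0; // would read from the billing module's own data store
        }
    }

    class OrdersModule { // depends on the contract, never on BillingModule itself
        private final BillingApi billing;

        OrdersModule(BillingApi billing) {
            this.billing = billing;
        }

        boolean canPlaceOrder(String customerId) {
            return billing.outstandingBalance(customerId) < 100.0;
        }
    }

    public class MonolithApp {
        public static void main(String[] args) {
            OrdersModule orders = new OrdersModule(new BillingModule());
            System.out.println(orders.canPlaceOrder("c-1")); // a plain in-process call
        }
    }

As long as BillingApi stays stable, the billing internals and its data store can change freely, and the call remains a simple in-process method invocation with no network hop or serialization.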


What's Wrong With Big Objects In Java?

There are several ways to fix or at least mitigate this problem: tune the GC, change the GC type, fix the root cause, or upgrade to a newer JDK. Tuning the GC, in this case, means increasing the heap or increasing the region size with -XX:G1HeapRegionSize so that previously humongous objects are no longer humongous and follow the regular allocation path. However, the latter will decrease the number of regions, which may negatively affect GC performance. It also means coupling GC options to the current workload (which may change in the future and break your current assumptions). However, in some situations, that's the only way to proceed. A more fundamental way to address this problem is to switch to the older Concurrent Mark Sweep (CMS) garbage collector, via the -XX:+UseParNewGC -XX:+UseConcMarkSweepGC flags (unless you use one of the most recent JDK versions, in which this collector is deprecated). CMS doesn't divide the heap into numerous small regions and thus has no problem handling objects that are several megabytes in size. In fact, in relatively old Java versions CMS may even perform better overall than G1, at least if most of the objects that the application creates fall into two categories: very short-lived and very long-lived.
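A minimal sketch of how a humongous allocation arises under G1; the class name and sizes are chosen for illustration, assuming a region size of 1 MB set via -XX:G1HeapRegionSize:

    // With a 1 MB G1 region, any object of half a region (512 KB) or more
    // takes the humongous allocation path.
    // Run with, e.g.: java -XX:+UseG1GC -XX:G1HeapRegionSize=1m Humongous
    public class Humongous {
        public static void main(String[] args) {
            byte[] big = new byte[600 * 1024];   // >= half a 1 MB region: humongous
            byte[] small = new byte[100 * 1024]; // regular allocation path
            System.out.println(big.length + " " + small.length);
            // Raising -XX:G1HeapRegionSize to 4m would make 'big' a regular
            // object again, at the cost of fewer, larger regions.
        }
    }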


Analysis: Tactics of Group Waging Attacks on Hospitals

UNC1878 has recently changed some of its tactics. For example, it no longer uses SendGrid to deliver the phishing emails and supply the URLs that lead to the malicious Google documents, Mandiant reports. "Recent campaigns have been delivered via attacker-controlled or compromised email infrastructure and have commonly contained in-line links to attacker-created Google documents, although they have also used links associated with the Constant Contact service," according to the Mandiant report. Hosting the malicious documents on a legitimate service is also a new twist; earlier campaigns were hosted on compromised infrastructure, Mandiant researchers say. Once the group delivers a loader via a malicious document, it downloads the Powertrick backdoor and/or Cobalt Strike Beacon payloads to establish a presence and communicate with the command-and-control server, the report says. Mandiant notes that the group uses Powertrick infrequently, perhaps for establishing a foothold and performing initial network and host reconnaissance. ... The group maintains persistence by creating a scheduled task, adding itself to the startup folder as a shortcut, creating a scheduled Microsoft BITS job using /setnotifycmdline and, in some cases, using stolen login credentials, the report says.


The secret to designing a positive future with AI? Imagination

Focusing on the positive is key to steering toward a positive destination. Instead of being passive passengers in a collective spaceship veering toward dangerous planets, we can actively move in the direction of the outcomes we want, such as full employment and equity. This is, at its heart, an exercise in vision. To be sure, realizing that vision will require a commitment to idealism, hope, and an openness toward change and uncertainty. But the vision is paramount and will set our future course. ... Building such a vision is a collective intelligence exercise that requires many voices from around the world. In taking this step, we can empower participants from various backgrounds and countries to make this vision real and to identify the implications of that long-term vision for present-day policy decisions. Such work can seem like a creative writing prompt, but it was actually a key exercise undertaken by the World Economic Forum’s Global AI Council (GAIC), a multi-stakeholder body that includes leaders from the public and private sectors, civil society and academia. In April 2020, we began pursuing an ambitious initiative called Positive AI Economic Futures, taking as its starting point the hypothesis that AI systems will eventually be able to do the great majority of what we currently call work, including all forms of routine physical and mental labour.


How Kubernetes extends to machine learning (ML)

The scalability of Kubernetes, alongside the flexibility of ML, can allow developers within the open source space to innovate without straining their workloads. Thomas Di Giacomo, president of engineering and innovation at SUSE, explained: “Kubernetes and cloud native technologies enable a broad selection of applications because they serve as a reliable connecting mechanism for a multitude of open source innovations, ranging from supporting various types of infrastructure to adding AI and ML capabilities that help make developers’ lives simpler and business applications more streamlined. “Kubernetes facilitates fast, simple management and clear organisation of containerised services and applications. The technology also enables the automation of operational tasks, like application availability management and scaling. “There’s no denying that AI and ML technologies will have a massive impact on the open source market. Developed by the community, AI open source projects will help to develop and train ML models, and will provide a powerful feedback loop that enables faster innovation. “We have already witnessed that at SUSE, where we have been developing AI and ML solutions together with Kubernetes to streamline their use by data scientists, who can then focus on their own needs and processes rather than the mechanics.”


REPORT: Consumer Privacy Concerns Demand Regulatory Compliance

Data privacy is gaining more attention from consumers in multiple markets, including the European Union and the United States. A study of U.S. consumers found that 87 percent feel data privacy should be considered a human right, for example. Many respondents are also wary of what businesses are doing with their information, with roughly 70 percent of consumers stating that they do not trust companies to sell their data ethically. Such trends are leading businesses and regulators to reconsider the United States’ existing data privacy and security standards. Data security is also being debated in the EU. The region’s General Data Protection Regulation (GDPR), which governs online data collection and storage, has been in place for approximately two years, but large companies are more frequently coming under regulatory scrutiny as consumers become more familiar with the rule. Google-owned video streaming service YouTube, for example, is currently facing a lawsuit over whether its data practices violate GDPR. The suit alleges that the platform fails to comply with GDPR because it collects data from minors, who cannot legally consent to sharing their digital information under the regulation.



Quote for the day:

"Risks are the seeds from which successes grow." -- Gordon Tredgold
