Daily Tech Digest - February 25, 2020

5G's impact: Advanced connectivity, but terrifying security concerns


Despite the enthusiasm, professionals are also concerned about some of the negative aspects of 5G, specifically security and cost. The top barriers to adopting 5G in the next three years included security concerns (35%) and upfront investment (31%), the report found. The relationship between 5G and security is complex. Overall, the majority of respondents (68%) do believe 5G will make their businesses more secure. However, security challenges are also inherent to the network infrastructure, according to the report. These concerns involve user privacy (41%), the number of connected devices (37%), service access (34%), and supply chain integrity (29%). On the connected devices front, some 74% of respondents said they are worried that having more connected devices will bring more avenues for data breaches. With that said, the same percentage of respondents understand that adopting 5G means they will need to redefine security policies and procedures. To prepare for both security and cost challenges associated with 5G, the report recommended users seek external help. The partners businesses will most likely work with include software and services companies (44%), cloud companies (43%), and equipment providers (31%). 


What if 5G fails? A preview of life after faith in technology


"If it gets to a point where it's a broad decoupling of the developed from the emerging economies," said Sec. Lew, "that's not good for anyone. The growth of emerging economies would not be very impressive if they didn't have very active, robust trading relationships with developed economies. And the costs in developed economies would go up considerably, which means that the impact on consumers would be quite dramatic." "We know, from the early days when there was CDMA and GSM," remarked Greg Guice, senior vice president at Washington, DC-based professional consultancy McGuireWoods, "that made it very difficult to sell equipment on a global basis. That not only hurt consumers, but it hurt the pace of technology." He continued: I think what the companies that are building the equipment, and seeking to deploy the equipment, are trying to figure out is, in a world where there may be fragmentation, how do we manage this? I don't see people Balkanizing into their own camps; I think everybody is trying to preserve, as best they can, international harmonization of a 5G platform. Those efforts are in earnest.


Greenpeace takes open-source approach to finish web transformation


“The vision is to help people take action on behalf of the planet,” said Laura Hilliger, a concept architect at Greenpeace who is a leading member of the Planet 4 project. “We want to provide a space that helps people understand how our ecological endeavours are successful, and to show that Greenpeace’s work is successful because of people working collectively.” She met Red Hat representatives in May 2018, after work on the project was already underway, and the relationship culminated in consultants, technical architects and designers from the company coming in to run a “design sprint” with Greenpeace exactly a year later. This helped Red Hat better understand Planet 4 users and how they interact with the platform, as well as the challenges of integration and effectively visualising data. Hilliger said variations in the tech stacks deployed across Greenpeace’s 27 national and regional offices, on top of its 50-plus websites and platforms, had created a complex data landscape that made integrations difficult.


Evolution of the data fabric


Personally, the fabric concept also began to change my thinking about infrastructure design. For too long the discussion was focussed on technology, infrastructure and location, which would then be delivered to a business upon which it would place its data. The issue with this was that the infrastructure could then limit how we used our data to solve business challenges. Data fabric changes that focus, building our strategy around our data and how we need to use it: a focus on information and outcomes, not technology and location. Over time, as our data strategies evolved with more focus on data and outcomes, it became clear that a consistent storage layer, while a crucial part of a modern data platform design, does not in itself deliver all we need. A little while ago I wrote a series of articles about Building a Modern Data Platform, which described how such a platform is multi-layered: it requires not just consistent storage, but must also be intelligent enough to understand our data as it is written, provide insight, apply security, and do these things immediately across our enterprise.


Legal Tech May Face Explainability Hurdles Under New EU AI Proposals


Horrigan noted the transparency language in the European Commission’s proposal is similar to the transparency principles outlined in the EU’s General Data Protection Regulation (GDPR). While the European Commission is still drafting its AI regulations, legal tech companies have fallen under the scope of the GDPR since mid-2018. Legal tech companies have also fielded questions regarding predictive coding’s accuracy and transparency with technology-assisted review (TAR), Horrigan added. TAR has become increasingly accepted by courts after then-U.S. Magistrate Judge Andrew Peck of the Southern District of New York granted the first approval of TAR in 2012. In his order, Peck discussed how predictive coding’s transparency provides clarity regarding AI-powered software’s “black box.” “We’ve addressed the black box before with technology-assisted review and we will do it again with other forms of artificial intelligence. The black box issue can be overcome,” Horrigan said. However, Hudek disagreed. While Hudek said the proposed regulation doesn’t make him hesitant to develop new AI-powered features for his platform, it does make it more challenging.


Thinking About ‘Ethics’ in the Ethics of AI

Ethics by Design is “the technical/algorithmic integration of reasoning capabilities as part of the behavior of [autonomous AI]”. This line of research is also known as ‘machine ethics’. The aspiration of machine ethics is to build artificial moral agents, that is, artificial agents with ethical capacities that can make ethical decisions without human intervention. Machine ethics thus answers the value alignment problem by building autonomous AI that by itself aligns with human values. To illustrate this perspective with the examples of AVs and hiring algorithms: researchers and developers would strive to create AVs that can reason about the ethically right decision and act accordingly in scenarios of unavoidable harm. Similarly, hiring algorithms are supposed to make non-discriminatory decisions without human intervention. Wendell Wallach and Colin Allen classified three types of approaches to machine ethics in their seminal book Moral Machines.


Cisco goes to the cloud with broad enterprise security service

Cisco describes the new SecureX service as offering an open, cloud-native system that will let customers detect and remediate threats across Cisco and third-party products from a single interface. IT security teams can then automate and orchestrate security management across enterprise cloud, network, applications and endpoints. “Until now, security has largely been piecemeal with companies introducing new point products into their environments to address every new threat category that arises,” wrote Gee Rittenhouse, senior vice president and general manager of Cisco’s Security Business Group, in a blog about SecureX. “As a result, security teams that are already stretched thin have found themselves managing massive security infrastructures and pivoting between dozens of products that don’t work together and generate thousands of often conflicting alerts. In the absence of automation and staff, half of all legitimate alerts are not remediated.” Cisco pointed to its own 2020 CISO Benchmark Report, also released this week, as more evidence of the need for better, more tightly integrated security systems.


Evolution of Infrastructure as a Service


Some would say that IaaS, SaaS, and PaaS are part of a family tree. SaaS is one of the more widely known as-a-service models, in which cloud vendors host business applications and deliver them to customers online. It enables customers to take advantage of the service without maintaining the infrastructure required to run software on-premises. In the SaaS model, customers pay for a specific number of licenses and the vendor manages the behind-the-scenes work. The PaaS model is more focused on application developers, providing them with a space to develop, run, and manage applications. PaaS models do not require developers to build additional networks, servers or storage as a starting point for developing their applications. ... IaaS is now enabling more disruption across all markets and industries as the same capabilities available to larger companies are now also available to the smallest startup in a garage. This includes advances in AI and Machine Learning (as a service), data analytics, serverless technologies, IoT and much more. This is also requiring large companies to be as agile as a startup.


AI Regulation: Has the Time Arrived?


Karen Silverman, a partner at international business law firm Latham & Watkins, noted that regulation risks include stifling beneficial innovation, the selection of business winners and losers without any basis, and making it more difficult for start-ups to achieve success. She added that ineffective, erratic, and uneven regulatory efforts or enforcement may also lead to unintended ethics issues. "There's some work [being done] on transparency and disclosure standards, but even that is complicated, and ... to get beyond broad principles, needs to be done on some more industry- or use-case specific basis," she said. "It’s probably easiest to start with regulations that take existing principles and read them onto new technologies, but this will leave the challenge of regulating the novel aspects of the tech, too." On the other hand, a well-designed regulatory scheme that zeroes in on bad actors and doesn't overregulate the technology would likely mark a positive change for AI and its supporters, Perry said.


Functional UI - a Model-Based Approach


User interfaces are reactive systems that are specified by the relation between the events received by the user interface application and the actions the application must undertake on the interfaced systems. Functional UI is a set of implementation techniques for user interface applications which emphasizes clear boundaries between the effectful and purely functional parts of an application. A user interface's behavior can be modeled by a state machine that, on receiving events, transitions between the different behavior modes of the interface. A state machine model can be visualized intuitively and economically in a way that is appealing to diverse constituencies (product owners, testers, developers), and surfaces design bugs earlier in the development process. Having a model of the user interface makes it possible to auto-generate both the implementation and the tests for the user interface, leading to more resilient and reliable software. Property-based testing and metamorphic testing leverage the auto-generated test sequences to find bugs without having to define the complete and exact response of the user interface to a test sequence. Such testing techniques have found 100+ new bugs in two popular C compilers (GCC and LLVM).
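To make the state machine idea concrete, here is a minimal sketch, not taken from the article: a hypothetical search-box UI expressed in TypeScript as a pure transition function, followed by a property-style check that feeds it random event sequences and asserts an invariant instead of an exact expected output. The state names, events and invariant are illustrative assumptions.

// A hypothetical search box modeled as a state machine with three behavior modes.
type State = "idle" | "loading" | "results";
type Event = "TYPE" | "SUBMIT" | "SUCCESS" | "CANCEL";

// Pure transition function: (state, event) -> state, no effects.
function transition(state: State, event: Event): State {
  switch (state) {
    case "idle":
      return event === "SUBMIT" ? "loading" : "idle";
    case "loading":
      return event === "SUCCESS" ? "results" : event === "CANCEL" ? "idle" : "loading";
    case "results":
      return event === "TYPE" ? "idle" : "results";
  }
}

// Property-style check: run many random event sequences and assert an
// invariant after every step, rather than the exact response to each sequence.
const allEvents: Event[] = ["TYPE", "SUBMIT", "SUCCESS", "CANCEL"];
const randomSequence = (length: number): Event[] =>
  Array.from({ length }, () => allEvents[Math.floor(Math.random() * allEvents.length)]);

for (let run = 0; run < 1000; run++) {
  let state: State = "idle";
  for (const event of randomSequence(20)) {
    state = transition(state, event);
    // Invariant: a CANCEL event must always leave the loading mode,
    // so the machine can never be "loading" immediately after CANCEL.
    if (event === "CANCEL" && state === "loading") {
      throw new Error(`Invariant violated in run ${run}`);
    }
  }
}
console.log("Invariant held for 1000 random event sequences");

Because the oracle is an invariant rather than an exact output, the same small check scales to as many generated sequences as one cares to run.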




Quote for the day:


"There is no 'one' way to be a perfect leader, but there are a million ways to be a good one." -- Mark W. Boyer

