Two of the biggest benefits of serverless computing should be clear: developers can focus on the business goals of the code they write rather than on infrastructure questions, and organizations pay only for the compute resources they actually use, in a very granular fashion, rather than buying physical hardware or renting cloud instances that mostly sit idle. As Bernard Golden points out, that latter point is of particular benefit to event-driven applications. For instance, you might have an application that is idle much of the time but under certain conditions must handle many event requests at once. Or you might have an application that processes data sent from IoT devices with limited or intermittent Internet connectivity. In both cases, the traditional approach would require provisioning a beefy server that could handle peak workloads, but that server would be underused most of the time. With a serverless architecture, you'd pay only for the server resources you actually use. Serverless computing would also suit specific kinds of batch processing.
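The event-driven model described above boils down to a stateless function invoked once per event. A minimal sketch (the handler name, event fields, and IoT scenario are all illustrative assumptions, not a specific cloud provider's API):

```python
# Hypothetical event-driven serverless handler for IoT telemetry.
# The platform invokes it once per event; you are billed only for
# the execution time, so idle periods cost nothing.
def handle_event(event, context=None):
    """Aggregate temperature readings from one device event."""
    readings = event.get("readings", [])
    valid = [r for r in readings if r.get("temp_c") is not None]
    avg = sum(r["temp_c"] for r in valid) / len(valid) if valid else None
    return {"device": event.get("device_id"),
            "avg_temp_c": avg,
            "count": len(valid)}

# Local invocation with a sample event, the way a platform might call it:
result = handle_event({"device_id": "sensor-7",
                       "readings": [{"temp_c": 20.0}, {"temp_c": 22.0}]})
print(result)
```

Because the function holds no state between invocations, the platform can run zero copies when traffic is quiet and many copies in parallel during a burst of events.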
The TCP/IP protocol is the foundation of the internet and of nearly every network in use today. The protocol was designed some 45 years ago, and it was created purely for connectivity: there is nothing in it for security, mobility, or trusted authentication. The fundamental problem with TCP/IP is that the IP address represents both a device's location and its identity on a network. Because the address conflates those two roles, the protocol lacks basic mechanisms for securing devices and for letting them move around a network. This is one of the reasons networks are so complicated today. To connect to things on a network or over the internet, you need VPNs, firewalls, routers, cell modems, and so on, along with all the configuration that comes with ACLs, VLANs, certificates, and the rest. The nightmare grows exponentially when you factor in internet of things (IoT) device connectivity and security. It's all unsustainable at scale. Clearly, we need a more efficient and effective way to handle network connectivity, mobility, and security.
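The location-plus-identity point is easy to see at the socket level. In this toy demonstration (my own illustration, not from the article), the only thing TCP/IP tells either side about its peer is an (IP address, port) pair; nothing in the protocol itself authenticates who is actually at that address:

```python
# A plain TCP connection over loopback. Note that the peer is identified
# solely by its address -- TCP/IP carries no notion of device identity,
# which is why layers like TLS, VPNs, and firewalls get bolted on top.
import socket
import threading

server = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
server.bind(("127.0.0.1", 0))          # port 0: let the OS pick a free port
server.listen(1)
port = server.getsockname()[1]

def accept_once():
    conn, addr = server.accept()       # 'addr' is all TCP/IP reveals about the peer
    conn.sendall(b"hello")
    conn.close()

t = threading.Thread(target=accept_once)
t.start()

client = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
client.connect(("127.0.0.1", port))
peer_ip, peer_port = client.getpeername()  # address doubles as "identity"

data = b""                              # read the 5-byte greeting robustly
while len(data) < 5:
    chunk = client.recv(5 - len(data))
    if not chunk:
        break
    data += chunk

client.close()
t.join()
server.close()
```

Anything beyond "some process answered at this address," such as proving which device or user is on the other end, has to come from machinery layered on top of the protocol.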
It’s reasonable to expect the nature of 5G services to change rapidly. A software-defined network infrastructure is flexible enough to let CommSPs speed the rollout of customizable applications and service models, ease customer provisioning, and improve the efficiency of network operation and management. Staying in lockstep with evolving 3GPP specifications and 5G service implementations will require a flexibility that only software-based infrastructure can provide. For example, one major US CommSP shared early plans for tiered 5G pricing based on data speeds, similar to broadband Internet pricing plans. The ability to support custom charging and new mobility service scenarios will be key to establishing, testing, and evolving pricing structures and business models. ... Network engineers and architects can deploy these servers in the core or at the network edge, which makes it possible to scale to multi-terabit configurations in the core network while sharing consistent infrastructure and software with distributed locations. CommSPs can use this network infrastructure to apply a cloud-native architecture that drives efficiencies, speeds deployments, and meets SLA requirements that differ sharply from those of the cloud computing industry.
The evasion and anti-analysis capabilities built into modern malware tools like AndroMut highlight the need for multilayered protections. In addition to securing emails and endpoint devices, organizations need to monitor for malware communication with command-and-control systems, Dawson notes. For enterprises, the threat posed by TA505 appears to be growing, according to Proofpoint. The group is behind some of the largest email campaigns ever, including one to distribute the Locky ransomware. Through 2017 and the first half of 2018, TA505 launched such massive campaigns that they dramatically affected global malicious email volumes, Dawson says. "The group saturated organizations with Locky ransomware and the Dridex banking Trojan," he notes. When TA505 shifted to smaller — though still relatively large — campaigns distributing RATs and other malware, it triggered a similar shift in this direction among other attackers that continues today, Dawson says.
The impetus, as usual, is law enforcement and intelligence agencies' concern over "going dark." In other words, suspects in an investigation - centering on child abuse, terrorism, drug trafficking or any other type of criminality - might be using communications techniques on which investigators cannot easily eavesdrop. The NSC advises the president on national security matters and coordinates policies across government departments. Last week's gathering of the NSC's Deputies Committee, three unnamed people with knowledge of the meeting told Politico, does not appear to have resulted in any decision to change current policies. "The two paths were to either put out a statement or a general position on encryption and [say] that they would continue to work on a solution, or to ask Congress for legislation," one of the people told Politico. One of the chief proponents of anti-crypto legislation was Deputy Attorney General Rod Rosenstein, but with his departure, the appetite for legislation meant to tackle the "going dark" problem appears to have waned, Politico reports.
Use of the cloud within a disaster recovery plan offers many benefits, including reliability and cost efficiency, as there is no need to invest in infrastructure that may never be used. Cloud resources can be offsite, mitigating the risk of a disaster affecting the main office location, and can be accessed (and paid for) only as needed. A multi-cloud disaster recovery strategy offers additional peace of mind that critical systems and data will remain easily accessible when needed. Although hybrid and multi-cloud deployments are widely acknowledged as good practice, IT professionals cite complexity, training gaps and a lack of internal resources as reasons for their hesitancy to deploy across multiple clouds. Nevertheless, more than half the respondents were operating in a multi-cloud environment, with nearly one in ten using five or more clouds within their organizations. “What we’re hearing from customers, and is consistent with our survey findings, is that they’re looking for ways to simplify and streamline their cloud deployment and management,” said Ziad Lammam, vice president of product management for Teradici.
According to world-renowned inventor and futurist Ray Kurzweil, looking at AI as a threat is unnecessary. Instead, humans should embrace technological advancements and allow them to, in turn, make us smarter. Machine learning has come a long way in recent years. AI algorithms have been honed and perfected, enabling machines to learn and update on their own. While this has affected all walks of life, when it comes to marketing, AI has helped improve the customer experience exponentially. Today's consumers expect companies to always be on, and they expect messages to be personalized. AI helps marketers achieve this level of personalization without having to work 24/7. It's ironic: this automated, mechanical tool is making marketing more personalized and human. ... This is where a marketer's touch and human intelligence come into play. At this point, and in the near future, there will still be a need for a human marketer behind AI tech to help steer the campaign in the right direction. If you're looking for evidence of this, consider the many issues that came to light as programmatic advertising gained traction.
The researchers note that the shape of a room can be acoustically determined from corresponding room impulse responses (RIR), which can be extracted from recorded sound signals. Exploiting this fact, they considered times of arrival (TOAs), or the time it takes for sound to travel from a source to a microphone. If the TOAs are known, they posited, the distance from the microphone to a target location can be inversely computed. But knowing TOAs isn’t enough, because the distances are unlabelled and acoustic sensors record reflections and echoes in an arbitrary order. To solve this, the team tapped a four-microphone array (the fourth microphone was used to verify the distances) and used a reflective point, which in this case refers to the intersection between the line from a target spot in the room to a microphone and a potential wall line. If the reflective point and the real sound source were on different sides of a reconstructed or potential wall line, the proposed system treated the target spot as noise and discarded the data.
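The basic TOA-to-distance step is simple geometry. A toy illustration of that one step (my simplification, not the paper's multi-microphone algorithm): a direct arrival time implies a one-way distance, and, for a source and microphone at the same spot, an echo's round trip off a wall implies the wall distance at half the total path.

```python
# Converting times of arrival (TOAs) into distances.
SPEED_OF_SOUND = 343.0  # m/s in air at roughly 20 degrees C

def distance_from_toa(toa_s):
    """One-way distance implied by a direct time of arrival, in meters."""
    return SPEED_OF_SOUND * toa_s

def wall_distance_from_echo(echo_toa_s):
    """Wall distance implied by a round-trip echo TOA, assuming the
    source and microphone are colocated (path = 2 * wall distance)."""
    return SPEED_OF_SOUND * echo_toa_s / 2.0

print(distance_from_toa(0.01))        # ~3.43 m to the source
print(wall_distance_from_echo(0.02))  # ~3.43 m to the wall
```

The hard part the researchers address is not this arithmetic but the matching problem: recorded echoes arrive unlabelled and out of order, so each TOA must first be associated with the correct wall.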
The latest soft robot from MIT’s Computer Science and Artificial Intelligence Laboratory (CSAIL), however, breaks the mould. Appearing more like a flower than a piece of machinery, CSAIL’s Origami Robot Gripper is a collapsible skeleton that can suck up objects using a vacuum. Its rubber skin aids with grip, allowing the robot to pick up items from any angle regardless of their shape. Currently, ‘hard’ robots struggle with non-standard shapes and, unlike the origami-inspired bot, can easily apply too much or too little force. Another benefit of the soft robot is that it is lightweight and made with relatively inexpensive materials. This means it is cheaper and, thanks to a simple design, less complex to make. And, instead of requiring extensive programming to handle different shapes and sizes, its vacuum can pick up a variety of products, from mushrooms to bottles of wine. It’s also capable of lifting 100 times its own weight. Scale up the design, and the Origami gripper could reliably retrieve a whole range of items. The most obvious application for MIT’s model is in groceries, either at physical checkouts or in warehouses.
Despite advice from the FBI that organizations should not pay ransoms, the decision is increasingly being looked at from a cost/benefit perspective. Insurance policies may cover ransoms, and the option may look appealing if the cost of recovery is more than the ransom. And as ProPublica reported last month, some forensics firms that claim to be able to resolve a ransomware infection are actually paying the ransom while passing the cost on to their customers. Plus, there's the vexing question of who is profiting from the ransom. ProPublica traced four ransom payments made by Proven Data Recovery, a firm based in New York. The payments - made to get the decryption key for a SamSam infection - ended up in bitcoin wallets linked to Iran. The city of Baltimore, however, refused to pay a ransom after a recent attack and endured an estimated $18 million in recovery costs. The city was hit by the Robbinhood ransomware, which forced it to revert to manual processes.
Quote for the day:
"Great achievers are driven, not so much by the pursuit of success, but by the fear of failure." -- Larry Ellison