There are plenty of paths to becoming a cloud architect — if you’re still early in your career, you might want to consider a formal degree program. But for those with experience in IT, Gartner suggests pros with the following skills and experience will find the transition easiest:

- Enterprise architects. Hilgendorf notes that the role of a cloud architect is “a new form of an enterprise architect,” and that the move from enterprise architect to cloud architect is an easy one. However, the report cautions it’s best suited to those with “real hands-on program leadership.”
- Virtualization and infrastructure architects. These are often a good fit for the cloud architect role, since “many cloud programs begin with simple IaaS projects, and virtualization architects are best-positioned to understand the technical nuances of a ‘virtualizationlike’ environment,” says Hilgendorf.
- Integration architects. Some of the biggest issues with cloud adoption arise with integration across the company. Integration architects are adept at working with complex systems, and they’re typically skilled at working across departments.
- Boundary-pushers. Employees known for rocking the boat or pushing the envelope with technology can serve as valuable liaisons to encourage company buy-in to new cloud technologies.
Each node contains the exact same data and transaction history, and that information is secured with cryptographic hashes and digital signatures. Combined in a shared peer-to-peer network, the nodes create a distributed ledger system in which each node has equal rights. Furthermore, the nodes are not dependent on each other; if one node leaves the network, the others still function because they hold the same secured data. The savings come from not having to deal with intermediaries or third parties, such as servers, which transmit the information back and forth, waiting for authentication and verification each time. Now, apply this model to data storage. With decentralized storage, data no longer lives on a single server but across a network of shared ledgers, each containing the same encrypted data. From a security standpoint alone, this has significant ramifications. Data breaches and hacks have typically focused on a centralized database — either on premises or in the cloud. Once the database or server is hacked, business is at least temporarily brought to a halt. In the blockchain model, if an attacker were able to breach one node, the others would still function, and business continues. The same principle applies if there is a power outage.
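A minimal Python sketch (not any particular blockchain implementation) can show why identical hash-chained copies on every node make tampering detectable: each block's hash covers the previous block's hash, so altering the data on one node breaks that node's chain while the others remain valid.

```python
import hashlib
import json

def block_hash(index: int, data: str, prev_hash: str) -> str:
    """Hash a block's contents, including the previous block's hash."""
    payload = json.dumps({"index": index, "data": data, "prev": prev_hash},
                         sort_keys=True)
    return hashlib.sha256(payload.encode()).hexdigest()

def append_block(chain: list, data: str) -> None:
    """Append a block whose hash chains it to the block before it."""
    prev = chain[-1]["hash"] if chain else "0" * 64
    chain.append({"index": len(chain), "data": data, "prev": prev,
                  "hash": block_hash(len(chain), data, prev)})

def chain_valid(chain: list) -> bool:
    """Recompute every hash and check each link to the previous block."""
    for i, b in enumerate(chain):
        if b["hash"] != block_hash(b["index"], b["data"], b["prev"]):
            return False
        if i > 0 and b["prev"] != chain[i - 1]["hash"]:
            return False
    return True

# Two nodes each hold an identical copy of the ledger.
node_a, node_b = [], []
for node in (node_a, node_b):
    append_block(node, "alice pays bob 5")
    append_block(node, "bob pays carol 2")

node_a[0]["data"] = "alice pays bob 500"   # an attacker tampers with one node
print(chain_valid(node_a), chain_valid(node_b))  # False True
```

The compromised node is immediately distinguishable from its honest peers, which is the property the excerpt's breach scenario relies on.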
The history of innovation in resource-constrained countries around the world shows that limited resources do not restrict innovation. Frugal innovation in underdeveloped countries has sparked major products, such as the ChotuKool refrigerator, launched in India a few years ago for only about $50 (instead of the typical $500). Many mothers in the Mumbai slums, who had never had refrigeration before, have bought this battery-operated refrigerator, and it has significantly changed their lives. ... Three recent innovations at Prysmian Group, the world’s largest manufacturer of energy and telecommunications cables, exhibit the kind of frugal digital innovation that can take place with constrained resources. Using a two-person internal team in collaboration with the university Politecnico di Milano, the Innovation Lab within the IT department of this Milan-based company developed each of these innovations for less than 100,000 euros, saving millions on each one. Among the first ideas created by the Prysmian Group’s new Innovation Lab was a drone-based monitoring system for inventory tracking in the company’s warehouses. Every facility stores hundreds of cable products, each weighing thousands of pounds.
One of the first challenges an IT operations team encounters with a move to the cloud is licensing. When cloud bursting is involved, the licensing and payment models become especially complex. Also, not all on-premises applications and services are designed for the cloud, and some enterprises overpay as a result. Enterprises must also deal with the human element during a cloud migration. With an on-premises deployment, internal operations personnel monitor performance, as well as manage resources, updates and patches. However, once workloads move to the cloud, the provider will take over some of these tasks. But don't start downsizing just yet. Staff can find new roles, including working with cloud vendors to make sure applications integrate well with existing on-premises systems. In an on-premises environment, dev and ops teams define an application's resource requirements and then monitor that application to adjust those resources over time. Capacity management for physical server workloads was pretty straightforward -- with mostly linear growth -- but cloud adds a new set of complexities that could cost enterprises money.
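The shift away from linear capacity planning can be made concrete with a toy sketch (all numbers here are hypothetical, for illustration only): on premises you provision for peak demand and pay for that capacity every hour, while cloud billing follows actual usage, so bursty workloads change the economics entirely.

```python
# Hypothetical hourly demand in CPU cores, with one burst hour.
hourly_demand = [40, 45, 50, 200, 55, 48]

# On premises: buy enough hardware for the peak, and pay for it every hour.
peak = max(hourly_demand)
onprem_core_hours = peak * len(hourly_demand)

# Cloud: pay only for what each hour actually consumes.
cloud_core_hours = sum(hourly_demand)

print(onprem_core_hours, cloud_core_hours)  # 1200 438
```

The same bursty profile that makes pay-per-use attractive is also what makes cloud costs harder to forecast than the old, mostly linear on-premises growth curve.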
By exploiting vulnerabilities in internet-connected cameras from Axis Communications, researchers at security firm VDOO found that remote attackers could take over devices using just the IP address, without previous access to the camera or its login credentials. The vulnerabilities have been disclosed to Axis, which has updated the firmware of all the affected products in order to protect users from falling victim to an attack. In a blog post, VDOO states that "to the best of our knowledge, these vulnerabilities were not exploited in the field". In total, seven vulnerabilities were discovered in the cameras, and the researchers have detailed how three of them could be chained together to gain remote access and execute shell commands with root privileges. The consequences include access to the camera's video stream, the ability to control where the camera is pointing, control over motion detection, and the ability to listen in on audio. There's also the potential for cameras exploited in this way to be used as an entry point into the network for a wider attack, or for the camera to be roped into a malicious botnet.
"Security is a huge, huge issue when you have remote workers," Carroll emphasized. Working from a home network introduces all kinds of risks and vulnerabilities to your work files. Even if you've never had a cybersecurity issue on a personal device on your home network, that doesn't mean you are always safe. "It's harder to maintain and control, because when people are working remotely, depending upon who their ISP is, their internet service provider, that opens up other probabilities and introduces other variables that aren't necessarily there when you're within a confined network within a workplace," Ryan confirmed. A virtual private network (VPN) and multi-factor authentication are the viable solutions, Carroll said. With a VPN, a private, encrypted channel is established between your device and a VPN server. No one but the user and the VPN provider sees or accesses the information, not even the internet service provider. Multi-factor authentication is widely popular and helpful, too: the user must provide at least two separate pieces of evidence proving their identity before gaining access to the respective site.
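One common form of that second piece of evidence is a time-based one-time password (TOTP, standardized in RFC 6238), the six-digit code that authenticator apps display. A minimal sketch using only the Python standard library (the secret below is the RFC 6238 test secret, not a real credential):

```python
import base64
import hashlib
import hmac
import struct
import time

def totp(secret_b32, at_time=None, step=30, digits=6):
    """Derive a time-based one-time password (RFC 6238) from a shared secret."""
    key = base64.b32decode(secret_b32)
    counter = int((time.time() if at_time is None else at_time) // step)
    digest = hmac.new(key, struct.pack(">Q", counter), hashlib.sha1).digest()
    offset = digest[-1] & 0x0F                               # dynamic truncation
    code = struct.unpack(">I", digest[offset:offset + 4])[0] & 0x7FFFFFFF
    return str(code % 10 ** digits).zfill(digits)

# RFC 6238 test secret: the ASCII string "12345678901234567890" in base32.
secret = "GEZDGNBVGY3TQOJQGEZDGNBVGY3TQOJQ"
print(totp(secret, at_time=59))  # 287082 (matches the RFC 6238 test vector)
```

Because both the server and the user's device derive the code from a shared secret plus the current time, a stolen password alone is not enough to log in.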
Ever since relational databases were proposed, I have been puzzled as to why this seemingly bizarre architecture has been allowed to persist. It is like having your filing department speak a foreign tongue, so that all instructions have to be written down in that language. But it’s worse. When you store a timesheet in a relational database you have to take it completely apart, with the header information in one table and all the detail lines that assign hours to projects as separate rows in another table. You have to take apart the form and construct the SQL that takes those bits and stores them. Oh yes, and make sure you put sequence numbers on all those detail lines if you want to be able to get them back in the same order. When you want the form back, you have to write SQL instructions to join the tables together, and then you have to pick out all the timesheet information from the returned results and reassemble it as a form.
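The round trip the author describes can be sketched with Python's built-in sqlite3 module (table and column names are invented for illustration): the timesheet is split into a header table and a detail-line table, with an explicit sequence column, and a JOIN is needed to get the form back.

```python
import sqlite3

conn = sqlite3.connect(":memory:")
cur = conn.cursor()

# The form is shredded into a header table and a detail-line table.
cur.execute("CREATE TABLE timesheet (id INTEGER PRIMARY KEY, "
            "employee TEXT, week TEXT)")
cur.execute("CREATE TABLE timesheet_line (timesheet_id INTEGER, "
            "seq INTEGER, project TEXT, hours REAL)")

cur.execute("INSERT INTO timesheet VALUES (1, 'Ada', '2018-W23')")
# The seq column is the sequence number needed to preserve line order.
cur.executemany("INSERT INTO timesheet_line VALUES (?, ?, ?, ?)",
                [(1, 1, "Apollo", 5.0), (1, 2, "Hermes", 3.5)])

# Getting the form back means joining the tables and re-sorting the lines.
rows = cur.execute("""
    SELECT t.employee, t.week, l.project, l.hours
    FROM timesheet t
    JOIN timesheet_line l ON l.timesheet_id = t.id
    WHERE t.id = 1
    ORDER BY l.seq""").fetchall()
print(rows)
```

The caller still has to reassemble those flat rows into the original form, which is exactly the impedance mismatch the excerpt complains about.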
The goal of this pattern is to improve the modularity of your application by removing the dependency between the client and the implementation of an interface. Interfaces are one of the most flexible and powerful tools to decouple software components and to improve the maintainability of your code. ... All of these principles enable you to implement robust and maintainable applications. But they all share the same problem — at some point, you will need to provide an implementation of the interface. If that’s done by the same class that uses the interface, you will still have a dependency between the client and the implementation of the interface. The Service Locator pattern is one option for avoiding this dependency. It acts as a central registry that provides implementations of different interfaces. By doing that, your component that uses an interface no longer needs to know the class that implements the interface. Instead of instantiating that class itself, it gets an implementation from the Service Locator. That might seem like a great approach, and it was very popular with Java EE, but, over the years, developers started to question this pattern.
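The excerpt's discussion is Java EE-flavored, but the pattern itself can be sketched in a few lines of Python (all class names here are hypothetical): the client asks the locator for an implementation of an interface instead of instantiating one itself.

```python
from typing import Callable, Dict, Type, TypeVar

T = TypeVar("T")

class ServiceLocator:
    """Central registry mapping an interface to a factory for its implementation."""
    _factories: Dict[type, Callable[[], object]] = {}

    @classmethod
    def register(cls, interface: Type[T], factory: Callable[[], T]) -> None:
        cls._factories[interface] = factory

    @classmethod
    def resolve(cls, interface: Type[T]) -> T:
        # The client never names the implementing class; the locator does.
        return cls._factories[interface]()

class Greeter:                       # the interface
    def greet(self, name: str) -> str: ...

class EnglishGreeter(Greeter):       # one concrete implementation
    def greet(self, name: str) -> str:
        return f"Hello, {name}"

ServiceLocator.register(Greeter, EnglishGreeter)

# Client code depends only on Greeter and the locator, not on EnglishGreeter.
print(ServiceLocator.resolve(Greeter).greet("Ada"))  # Hello, Ada
```

The later criticism of the pattern stems largely from this global registry: the client's real dependencies are hidden inside method bodies rather than declared up front, which is why dependency injection is often preferred today.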
There’s some history to it, but the short answer is you can’t build a world-class security architecture today without leveraging the network. That’s where the world has evolved to. A number of years ago a lot of security was about protecting the enterprise, and it still is. You should block everything you possibly can, but you can’t keep everything out. Everybody knows that. If you can’t block everything, there’s going to be something in your network. And once there’s something in your network, the network is a pretty good place to defend and to look for it. There are several things customers need to do. One is what I call ‘constrain the operational space of the attacker.’ Somebody gets into your network through compromised credentials, which is a very prevalent technique: I get your credentials, and I can get into your network. You want to isolate them to only the part of the network they have access to. That’s segmentation. It turns out the first thing we automated with DNA Center was software-defined access, which is like software-defined segmentation. It helps you protect your network. The problem with segmentation is it’s hard to implement, so we automate it.
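Segmentation as described here can be reduced to a toy policy check (segment and service names are made up for illustration; real software-defined access enforces this in the network fabric, not in application code): an account compromised in one segment can only reach that segment's allowed destinations.

```python
# Hypothetical segmentation policy: which segments may reach which services.
SEGMENT_POLICY = {
    "hr-workstations": {"hr-apps"},
    "engineering": {"build-servers", "code-repos"},
}

def can_reach(source_segment: str, target_segment: str) -> bool:
    """Allow traffic only to the destinations the source segment is granted."""
    return target_segment in SEGMENT_POLICY.get(source_segment, set())

# Credentials stolen from an HR workstation cannot pivot to the code repos.
print(can_reach("hr-workstations", "code-repos"))   # False
print(can_reach("engineering", "build-servers"))    # True
```

However the attacker got in, their operational space is bounded by the source segment's policy, which is the containment property the interview is describing.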
With blockchain, Lubin claimed, “The business model of exploiting people [and] personal information is going to change. I think it’s going to be even better, potentially, for those companies. They’ll be less exposed to the risk if we are controlling our own data, encrypted, and enabling it to be selectively disclosed in situations that we designate.” Imagine owning all your digital health care records and granting providers or insurance agents access only to the data of your choosing. These kinds of experimental blockchain technologies will require cautious and careful experimentation, but that experimentation is worth investing in. The goal, though, shouldn’t be “hyper growth” and fast returns on those investments. There are lots of questions about how blockchain will work in the wild. How should it be regulated? Can it run without consuming vast amounts of energy? Is the technology mature enough to really go mainstream? A couple of months ago, I discovered another use for blockchain, and a project that might be ready for prime time. A collective of developers and journalists is launching a radical blockchain experiment called Civil.
Quote for the day:
"A point of view can be a dangerous luxury when substituted for insight and understanding." -- Marshall McLuhan