To be clear, complexity is not the inevitable outcome of low code. Just as in traditional application development, complexity can and often does make its way into the lifecycle of the product code base. While not inevitable, it is common. There are many steps you can take to reduce complexity in apps regardless of how they are built, which improves performance, scalability, availability, and speed of innovation. Yes, a low code application, like any application, can become complex and will then require simplification techniques. But these issues are not tied to the use of low code; they are just as significant in conventional product development. What low code does increase is the amount of code in your application that was not written directly by your development team: code auto-generated by the low code platform, or pulled in through libraries your application needs in order to function. Thus there is often more “unknown” code in your application when you use low code techniques. But unknown is not the same thing as complex.
Microservices present several challenges to software engineers, especially as a first step into distributed systems. But that does not mean we are on our own: there are several tools in the Java world that make life easier, notably MicroProfile. MicroProfile's goal is to optimize enterprise Java for a microservices architecture. It is based on the Java EE/Jakarta EE standards plus APIs designed specifically for microservices, such as REST Client, Configuration, and OpenAPI. WildFly is a powerful, modular, and lightweight application server that helps you build amazing applications. ... Unfortunately, we don't have enough articles that talk about it. We should have a model, even with schemaless databases, when the business information is more uncertain. Still, the persistence layer has more issues, mainly because it is harder to change. One of the secrets to making a scalable application is statelessness, but we cannot afford that in the persistence layer: the database's primary purpose is to keep the information and its state.
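To make those APIs concrete, here is a minimal sketch (not from the article) of MicroProfile Config and the type-safe REST Client as they might be used on WildFly or any other MicroProfile-compliant runtime. The InventoryClient interface, the /items path, the inventory-api config key, and the catalog.page.size property are all hypothetical, and the jakarta.* package names assume a recent MicroProfile/Jakarta EE baseline (older releases use javax.*).

```java
// InventoryClient.java -- a type-safe MicroProfile REST Client interface for a
// hypothetical remote inventory service; the base URL is resolved from
// configuration via the "inventory-api" config key.
import jakarta.ws.rs.GET;
import jakarta.ws.rs.Path;
import jakarta.ws.rs.Produces;
import jakarta.ws.rs.core.MediaType;
import org.eclipse.microprofile.rest.client.inject.RegisterRestClient;

import java.util.List;

@RegisterRestClient(configKey = "inventory-api")
@Path("/items")
public interface InventoryClient {

    @GET
    @Produces(MediaType.APPLICATION_JSON)
    List<String> listItemIds();
}

// CatalogService.java -- a CDI bean that injects a config value and the client.
import jakarta.enterprise.context.ApplicationScoped;
import jakarta.inject.Inject;
import org.eclipse.microprofile.config.inject.ConfigProperty;
import org.eclipse.microprofile.rest.client.inject.RestClient;

import java.util.List;

@ApplicationScoped
public class CatalogService {

    // Read from microprofile-config.properties, an environment variable,
    // a system property, etc.; falls back to 20 if the key is absent.
    @Inject
    @ConfigProperty(name = "catalog.page.size", defaultValue = "20")
    int pageSize;

    @Inject
    @RestClient
    InventoryClient inventory;

    public List<String> firstPage() {
        List<String> ids = inventory.listItemIds();
        return ids.subList(0, Math.min(pageSize, ids.size()));
    }
}
```

With this wiring in place, the runtime supplies the remote endpoint from a property such as inventory-api/mp-rest/url, so the calling code contains no HTTP plumbing at all; switching endpoints between environments becomes purely a configuration change.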
What has made CPaaS the go-to method for customer engagement is the ubiquity of cloud technology and the way it has transformed how businesses operate. “Companies had to come up with different ways to interact with customers,” says IDC research VP Courtney Munroe, who points out that the last few years have seen a steady move to cloud and, in particular, a confluence of mobility and cloud. “More people use smartphones and companies realised that they could develop apps for them,” he says. Steve Forcum, chief evangelist at Avaya, is also aware of the importance of cloud for enterprises looking to engage with customers. “Some customers may keep elements of their communications stack in their datacentres, but more are then infusing cloud-based capabilities,” he says. “We’ve moved to help customers across this spectrum by bringing cloud-based benefits to their datacentres.” But the technology on its own is secondary to companies’ need to be more responsive to customers. The underlying drive towards CPaaS is the need to offer a more flexible way to interact with customers.
The most concerning threat is frequently “Will releasing this make it easy for my main competitor to copy this new feature and hurt our differentiation in the market?”. If you haven’t spent time personally engineering ML features, you might think that releasing a model file, for example as part of a phone app, would make this easy, especially if it’s in a common format like a TensorFlow Lite flatbuffer. In practice, I recommend thinking about these model files like the binary executables that contain your application code. By releasing one you make it possible to inspect the final result of your product engineering process, but trying to do anything useful with it is usually like trying to turn a hamburger back into a cow. Just as you can disassemble an executable to get its overall structure, you can load a model file into a tool like Netron. You may learn something about the model architecture, but, just like disassembling machine code, that won’t actually give you much help reproducing the results. Knowing the model architecture is mildly useful, but most architectures are well known in the field anyway, and only differ from each other incrementally.
Bearing security in mind at all times rings true, as it inspires us to think about the security implications of the changes we are making. On the other hand, it bears some resemblance to the old premature performance optimization debate. We’re not going to wade into that here (or the test-driven development debate, or any other similar one). I just want to point out that software development is rife with complexity and obstacles to action. Security considerations must be factored into that equation. The next bullet point in the fact sheet makes the following statement: “Develop software only on a system that is highly secure and accessible only to those actually working on a particular project.” This one makes the reader pause for a moment. It seems to have arrived at the conclusion that in order to build secure systems, we should build secure systems. If we are patient, the next sentence delivers the full meaning: “This will make it much harder for an intruder to jump from system to system and compromise a product or steal your intellectual property.” What the framers of this fact sheet are driving at is actually something like a rephrasing of zero trust architecture.
The impact of this legislation depends entirely on the usefulness of the taxonomy itself, says Jennifer Fernick, senior vice president and global head of research at security consultancy NCC Group. "The authors of that taxonomy need to meaningfully answer what data points about cybercrime will enable meaningful intervention for the future prevention of these crimes," Fernick, who is also a National Security Institute visiting technologist fellow at George Mason University, tells Information Security Media Group. "It is important, for example, to distinguish, at minimum, computer-related crimes that attack human judgment or exploit edge cases in business processes from crime that is enabled through specific hardware or software flaws that can be exploited by criminals attacking an organization's IT infrastructure. In the latter case, it would be valuable in particular to identify the specific software or hardware components, or even specific security vulnerabilities or CVEs, which served as the substrate for the attack, to help inform organizations about where they would most benefit from strengthening their cybersecurity defenses," Fernick says.
Using smart data capture on mobile devices has multiple benefits. Unlike fixed scanners, it enables customer service agents to perform multiple tasks anywhere in the airport. Airlines can automate processes such as check-in, security queues, lounge access, and luggage management, making a modern, sleek impression from the first moment a passenger enters the terminal. Compared with the old approach of using rugged devices at fixed stations, smart data capture on mobile devices delivers significant customer benefits and staff efficiencies. Airport queues have been big news recently, but when staff are equipped with smart mobile devices, waiting times can be cut: staff can patrol queues and scan IDs, passports and QR codes to speed passengers through check-in and deliver a more personalised experience, accessing details about a passenger’s seat preferences or dietary requirements, for example. Customer service agents using smart mobile devices can also easily manage oversized luggage presented at the gate and quickly check it into the hold.
Basically, the value of decentralized cloud in its current form boils down to the circumstances and needs of the users. “If you’re setting up a mining node and need some cloud power, why would you want to pay AWS?” Litan asks. A decentralized cloud might be cheaper to run in such cases, she says, which appeals to miners who want cheap computing in order to make money on the margins. At the moment, when many developers write applications, they look to the most readily available cloud service, Litan says, and then wind up deploying on the main blockchain where there is no control over where Ethereum or Bitcoin run. “It’s like saying, ‘Where’s the internet running?’” There is some possibility for blockchain and decentralized cloud to gain more momentum down the road, but for now their impact on the entirety of cloud computing remains rather niche. “It may become more important as people start writing compute-intensive workloads and they want to keep the cost down,” Litan says. Decentralized cloud computing may also be useful for organizations running non-blockchain applications, she says.
It can be difficult to drive growth when teams are stretched and global tensions are high, as they have been for the better part of two years. New process adoption can meet resistance from employees who are already overwhelmed. If and when this happens, a stalemate often follows, and team leaders opt to wait it out, deferring change to another team or another time. ... The pandemic challenged us all to rethink the way we work. Investments in software took the place of physical office space, and teams were pushed to automate repeatable tasks to maintain a pre-pandemic level of efficiency. With the implementation of artificial intelligence and machine learning, workflow improvements can be expedited, reducing how many employees are needed. Technologies like low-code and no-code are easing the burden felt by developers by enabling employees outside of IT to build systems unique to their needs without the slowdown created by a backlog of IT tickets. In turn, this frees up bandwidth for developers to turn to other pressing concerns like security.
This idea was brought to life when I interviewed Bracken Darrell, the CEO of Logitech International, a computer peripherals manufacturer headquartered in Switzerland and the US. In that conversation, he shared with me the story of how, about five years into his tenure at the company, he asked himself one Sunday night, “Am I the right person for the next five years?” On paper, he certainly was, he told me, given that all his changes at the company had lifted the stock about 500%. “On the other hand, I had been involved in every single personnel and strategic decision,” he said. “My disadvantage was that I knew too much, and that I was too embedded in everything we were doing. I just thought to myself that I might be done.” So he decided that night that he was going to fire himself, but he would sleep on the decision. The punchline is that he didn’t fire himself, but he did wake up the next morning with a sense of clarity about what he needed to do: “I have to rehire myself but have no sacred cows. It was super exciting and fun, and I started changing things that I had put in place. Fortunately, I didn’t have to change things radically, but I felt new again.”
Quote for the day:
"Risks are the seeds from which successes grow." -- Gordon Tredgold