Top 10 Myths about Software Product Development
As businesses embark on their digitalization journeys, software solutions have become central to all business operations.
According to Grand View Research, the business software and services market generated $389.86 billion in revenue in 2020, a figure expected to grow at a CAGR of 11.3% between 2021 and 2028.
Ever since FORTRAN was released in the 1950s, the software industry has been a favourite vertical for many.
At the same time, several myths and misconceptions are floating around this space.
Here are the top 10 myths about software product development.
1) The Most Popular Language is the Best One
Every developer has a favourite programming language, and it is usually the one they work in. The general notion is that the most popular programming language is the best in the business. However, that is not true. Different languages serve different purposes, so you can't rank one over another. When choosing a programming language, consider aspects such as business requirements, the existing technology stack, developers' expertise, and licensing and usage costs.
2) Coding Knowledge is enough to Build a Product
Many novice developers believe that coding knowledge alone is enough to build a product. While you certainly need to know how to code, software development is not just about writing code. You also need domain knowledge, an understanding of the subject area, and the ability to think from a customer's or user's perspective. In short, you should be able to think beyond the IT space.
3) Software Development is Expensive
Because software engineers command high salaries, small and medium businesses tend to purchase generic software instead of choosing custom application development. However, a 'one-size-fits-all' solution doesn't suit today's dynamically changing IT era. When business and user requirements change, realigning an off-the-shelf solution to meet those needs becomes a challenge. Moreover, as the company grows, you may end up having to replace the software altogether, so custom development is often cheaper in the long run.
4) The Latest Tools are always the Best
People often believe that using the latest cutting-edge tools will make their technology stack robust, powerful and efficient. However, that is not always true. The criteria for choosing a software tool should be its performance, functionality, features, future-readiness and adaptability, not its novelty or popularity. If a tool is not backward-compatible, you'll have to rework your app every time the tool is updated. So, the newest tool is not a silver bullet.
5) More People in the Team means Faster Time to Market
In today's fast-paced world, businesses are required to deliver products faster. As such, they tend to hire more developers to get the work done quickly, especially when a project is missing its deadlines. However, adding engineers to a team doesn't always expedite the process; as Brooks's law famously puts it, adding manpower to a late software project often makes it later, owing to communication and collaboration overheads. A better way is to streamline and orchestrate operations, design the right CI/CD pipelines, and apply automation.
6) The Project is Done once it goes Live
A software product development project involves various phases such as planning, design and development, testing and deployment. A common notion is that the project is done once the app is uploaded to the app store. However, that is not true. Once the app is available to users, you should monitor its performance, collect feedback, and apply changes and updates as required. If you don't update the app for a long time, it might even get removed from the app store. In today's customer-centric product environment, a software engineer's job ends only when the app ceases to exist for users.
7) Remote Software Development is Expensive
One common myth in software development circles is that outsourcing projects to remote teams incurs huge expenses. In reality, when you outsource a project to a third party, you get access to highly experienced professionals without having to deal with a complex hiring process, HR issues, insurance, labour benefits and so on. You get the best talent, pay only for the technical expertise, and receive a quality product in return. The key is to choose the right outsourcing company for your IT needs.
8) Agile development methods are Complex to Handle
While the IT world innovates rapidly, agile and DevOps methodologies are becoming inevitable. However, some organizations are apprehensive about embracing them, fearing that cross-functional teams will be difficult to manage; implementing a cultural change across the organization is another challenge. So, they stick with the waterfall process. While the waterfall method seems easy at the beginning, you will end up struggling with flexibility, adaptability, mobility and UI/UX issues once the app is launched.
9) Quality Tools build Quality Products
People often believe that choosing a high-quality tool will help them build a quality product. However, product quality doesn't depend on the tool alone; it also requires critical thinking, analysis, project planning, communication, collaboration and coding skills. Choosing the right tool merely makes the job easier.
10) Outsourcing is a one-stop solution for all IT problems
Outsourcing is a popular way for organizations to get things done. However, a common myth is that outsourcing is a one-stop solution that fixes every IT problem, which is not true. Outsourcing is primarily done so that the organization can focus on its core processes while the outsourcing partner handles its IT needs, and it comes with its own challenges. It is important to choose the right engagement model: with a fixed-price model you might run into service-level and quality issues, while a dedicated-team model is good but again depends on the company you select for the project.
Conclusion
Software development is a popular industry that is always evolving; today's innovation is tomorrow's legacy. Organizations should therefore proactively monitor IT trends and customer requirements and adapt quickly. Businesses that take application development seriously are sure to surge ahead of the competition.
Top 5 Things to Know before Starting Product Engineering
As businesses transition from a product-driven business model towards a customer-centric development approach, product engineering is rapidly gaining prominence.
Product engineering services integrate software development services with product management, enabling organizations to align user requirements and user experience with business requirements and objectives while optimizing costs.
As your products define your company, it is essential to implement product engineering in the right way to build quality products faster and better.
Here are top five things to know before starting product engineering.
1) Product Engineering Roles and Responsibilities
Before jumping into product development, it is critical to understand the difference between product engineering and product development. Though the two terms look similar, they differ in roles and responsibilities. Product development is the broader term covering every phase of the product lifecycle, while product engineering is the part of it that combines software engineering with product management. Be it an app, software or a business system, the role of a product engineer is to cost-effectively design a customer-centric product, implementing the right technology stack and methodologies while aligning them with business objectives and goals. It involves planning, design, development, testing, deployment and sustainable maintenance of the product.
2) Full-Stack Engineer is Different from a Product Engineer
A full-stack engineer is often confused with a product engineer. A full-stack engineer is responsible for developing back-end and front-end systems and integrating them via APIs; their role is limited to managing software, tools and human resources within the scope of a given project. Product engineers, however, have a broader role to play, from designing and deploying the product to ensuring that customers are satisfied with it. When you clearly identify this thin line and rightly define roles and responsibilities, product engineering becomes efficient, easy and cost-effective.
3) The Importance of the Right Product Development strategy
Product engineers are responsible for delivering a quality product within development timelines and budgets, so defining the right product development strategy is key. Firstly, consider the operational aspects of the product, such as its efficiency, consistent performance, security, usability and costs. Secondly, mobility is a key requirement in today's cloud and mobile era, so consider portability, adaptability, reusability and interoperability. Thirdly, a modular design offers flexibility, supports self-healing and scales easily, making maintenance tasks straightforward. With the right development strategy in place, the project becomes easily manageable.
4) Drive Innovation with Calculated Risks
Innovation is a key ingredient that not only adds value to a product but also helps you withstand the competition. However, innovation comes with certain risks. So, it is recommended to research, collect and analyse relevant data, assess the future functionality of the product, identify market gaps in the area, and establish the product's feasibility. Before moving ahead with development, it is also important to document the requirements specification; this can be done while preparing the roadmap and designing the product architecture.
5) Seamless Collaboration across Teams
When it comes to product development, different teams have different goals and metrics. Product engineers envisage a product that scores high on quality, usability and durability while delivering better functionality. Designers, on the other hand, are concerned with the aesthetics and appearance of the product; delivering a great user experience is their key requirement. And while developers love adding more features, the operations team tends to resist them, on the principle that fewer changes mean more stability. Collaborating across these teams and agreeing on a common plan helps you strike the right balance between features, design and stability, expediting delivery while increasing quality.
The business world is rapidly evolving, putting constant pressure on product development teams. Today, businesses should proactively monitor changing market trends and realign their IT solutions accordingly. Product engineering services help you cost-effectively build quality products faster while ensuring higher customer satisfaction. Businesses that ignore product engineering are sure to fall behind the competition.
How can businesses make profits with a low-code/no-code approach?
The year 2021 seems to be the year of low-code / no-code app development. Though the low-code approach is still in its nascent stage, businesses are already reaping benefits from it.
The month of June has already seen some interesting announcements related to low-code / no-code app development.
Mendix Shows the Way
A notable announcement came from Mendix on June 8, 2021, stating that the Dutch insurance company TVM has partnered with Mendix to develop Bumper, a low-code app that accelerates the damage claims process.
When a vehicle meets with an accident and gets damaged, you can instantly add the damage details into the app and get a detailed report of the damage. It helps you to smartly process damage claims while giving you insights into the process in real-time. As such, damage claims are quickly and efficiently processed while delivering high customer satisfaction.
Amazon Web Services (AWS) enters the Low-code Arena
On June 17 2021, Amazon Web Services (AWS) announced the launch of Workflow Studio, a low-code app development tool that enables organizations to quickly build applications with minimal coding skills on its public cloud platform. So, how do businesses benefit from this AWS low-code / no-code offering?
AWS accounts for the majority of cloud infrastructure usage across the globe, and as most businesses already run their cloud networks on AWS, it becomes easy for them to take advantage of its low-code solutions. While developers use the platform to build applications quickly, business teams with zero coding knowledge can create their own apps for day-to-day activities. Low-code apps offer faster time to market and reduce development costs, HR-related costs, office footprint and more. They also help businesses deal with the shortage of qualified software professionals.
UI / UX gets better with Infragistics
Low-code and no-code app development platforms focus on enabling users to build apps quickly without writing lengthy code; regardless of their coding ability, users can assemble apps using drag-and-drop tools. However, user interface (UI) and user experience (UX) have been a concern for businesses: current low-code platforms help you convert an idea into a prototype while paying little attention to the user experience. Infragistics is now filling this gap.
Cross-platform UI/UX toolmaker Infragistics released a new product, Infragistics Ultimate 21.1, on 17th June 2021, which aims to deliver the right UI/UX designs while building low-code apps. It helps business teams build highly intuitive dashboards with the right UI/UX design. Currently, the IT industry faces a shortage of experienced UI/UX professionals. The average salary of a UX designer in the US is $96,529, along with a cash bonus of $5,000 per year, as reported by Indeed; San Francisco is the highest-paying city for UX designers, at $140,975 per annum. With Infragistics Ultimate 21.1, organizations can incorporate UI/UX designs into their apps using pre-built templates and tools. As such, businesses can save substantially on UI/UX salaries while overcoming the talent shortage, expedite software development projects, and ultimately deliver a much better customer experience.
Looking at the entrance of IT giants into this segment, it becomes evident that low-code is not just a business hype but is delivering results. So, organizations need to tap these business benefits at the earliest.
Here are 5 important areas wherein businesses are making profits with low-code development:
BizDevOps
BizDevOps is a recent buzzword in development circles. Low-code app development extends DevOps by incorporating business staff into cross-functional teams to develop customer-centric apps. When a team has a clear understanding of the project's value stream, the end-to-end customer lifecycle, company strategy and business objectives, quality products are built faster and at lower cost. Shadow IT can be effectively controlled as well.
Accelerate your Microservices Journey
As businesses move away from monolithic systems towards a microservices architecture, low-code platforms accelerate the process by enabling you to quickly re-architect monolith functions into microservices exposed via APIs. You can start with low-risk apps that have a high impact on your business processes.
Self-serving customer-centric portals
Business teams that work directly with customers know what customers need: searching for the company's services, getting a quote, paying bills and getting answers are a few examples. As such, business teams without coding knowledge can quickly build a self-service web portal to address customer-specific needs. In addition, companies can quickly build a mobile app to serve customers.
Optimized Costs
Low-code / no-code app development platforms reduce the need to hire expensive software engineers. With low-code platforms, you can quickly and cost-effectively build and deploy business applications, handing advanced features and integration tasks over to senior developers. That way, you can reduce the size of the software team and the office footprint. Along with saving operational costs, you avoid the tedious hiring process, as well as bonuses, insurance and HR-related compliance overheads.
Customer satisfaction is the key
Apps built on low-code platforms are highly customer-centric because they are built by the people who interact with customers. Salespeople often complain about inefficient IT-designed processes that make customers walk away before a sale closes; when the salesperson creates the app, they know what should and shouldn't be included. So, businesses can make more sales and generate more revenue, and more satisfied customers mean repeat business and new referrals.
Several companies have already started to benefit from low-code app platforms. What about your organization?
Why is Cloud Native App Development the future of IT?
“Change is the law of life and those who look only to the past or present are certain to miss the future” - John F. Kennedy
The above quote is apt for the cloud computing era. Today, many businesses look only at the past and adjust their current IT operations accordingly.
However, it is important to look at the future to stay in and ahead of the competition.
The constant change that happens in the IT landscape has accelerated with the advent of cloud computing.
As every IT product or resource is delivered over the Internet as a service, it is high time that software developers realign their software development strategies to suit the cloud landscape.
Cloud native app development is the right approach to make your business future-proof. The COVID-19 pandemic, which pushed businesses into a work-from-home environment, further complements cloud native app development.
What is Cloud Native App Development?
Cloud native app development means different things to different people. Put simply, it is an approach to building future-proof cloud apps that take advantage of cloud processes and platforms to deliver a consistent user experience across all devices, cloud models and environments.
Portability, high scalability and adaptability are the three key qualities driving cloud-native app development in IT circles. Business processes are changing rapidly, and businesses need to adapt quickly and build cloud native apps accordingly. These apps should deliver a consistent user experience across a range of devices, which makes portability a key requirement, and they should be scalable enough to absorb traffic spikes. Cloud native app development brings all three of these qualities to IT processes.
Here are some key components of cloud native apps:
Microservices Architecture
Microservices architecture is a type of software architecture wherein complex applications are built as small, loosely coupled, independent and autonomous services that perform a specific task and communicate with each other via APIs. It is a variant of Service-Oriented Architecture (SOA) that enables developers to quickly build and deploy applications.
Microservices architecture allows businesses to quickly adapt to changing IT requirements as applications built using this architecture are flexible and easily extendable to suit different IT environments. So, you don’t have to code apps from scratch for each IT environment. You can begin small and massively scale up within a quick time. Moreover, these independent services allow you to scale specific services instead of scaling the entire app. The biggest advantage is that you can customize your technology stack based on your cloud environment without getting stuck with a standard approach.
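The idea above can be sketched in plain Python. In this minimal, self-contained example (the service names and data are invented for illustration), a tiny "inventory" service owns one task and exposes it over an HTTP API, and a consumer talks to it over that API rather than sharing code or a database with it:

```python
import json
import threading
from http.server import BaseHTTPRequestHandler, HTTPServer
from urllib.request import urlopen

# A tiny "inventory" microservice: it owns a single task (reporting
# stock levels) and exposes it over an HTTP API.
class InventoryHandler(BaseHTTPRequestHandler):
    STOCK = {"sku-42": 7}  # in-memory data store, just for the sketch

    def do_GET(self):
        sku = self.path.strip("/")
        body = json.dumps({"sku": sku, "stock": self.STOCK.get(sku, 0)})
        self.send_response(200)
        self.send_header("Content-Type", "application/json")
        self.end_headers()
        self.wfile.write(body.encode())

    def log_message(self, *args):
        pass  # silence per-request logging

def start_service(port=8001):
    # Run the service in a background thread so a consumer can call it.
    server = HTTPServer(("127.0.0.1", port), InventoryHandler)
    threading.Thread(target=server.serve_forever, daemon=True).start()
    return server

if __name__ == "__main__":
    server = start_service()
    # An "order" service (or any other consumer) communicates only
    # through the API, so each side can evolve and scale independently.
    with urlopen("http://127.0.0.1:8001/sku-42") as resp:
        print(json.loads(resp.read()))
    server.shutdown()
```

Because the consumer depends only on the HTTP contract, the inventory service could later be rewritten in another language or scaled out behind a load balancer without touching the caller, which is exactly the flexibility the architecture promises.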
Containers
As applications are centrally hosted and delivered over the cloud, portability becomes a key requirement. Containerization enables you to virtualize the operating system and run applications inside containers. A container is a portable computing environment comprising the binaries, libraries, dependencies and other configuration files an application requires. Using software containers, businesses can easily run applications in various environments: mobile, desktop, cloud, bare metal, virtual machines and more. Software containers bring the greater agility, portability and reusability that cloud native applications depend on.
Software-Defined Infrastructure
As cloud services are centrally hosted and accessible from any location, administrators should be able to manage the infrastructure from anywhere as well. Software-defined infrastructure virtualizes hardware infrastructure, enabling you to automatically add, delete, stop and start any network resource using software from any location. By implementing software-defined infrastructure, cloud native apps can be easily managed from any location.
Application Programming Interface (API)
Application Programming Interface (API) is an interface that facilitates communication between different applications or services. As cloud native apps are built as multiple services, they use APIs to communicate with each other as well as with other 3rd party applications. For instance, if you want to add multiple languages to your app, you can use the Google Translate API without writing the code from scratch.
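As a hedged sketch of that Google Translate example, the snippet below only builds (without sending) a request against Google's public Cloud Translation v2 REST endpoint; the API key is a placeholder, and an actual call would require a valid key and an enabled Google Cloud project:

```python
from urllib.parse import urlencode
from urllib.request import Request

# Public REST endpoint for Google Cloud Translation v2.
ENDPOINT = "https://translation.googleapis.com/language/translate/v2"

def build_translate_request(text: str, target_lang: str, api_key: str) -> Request:
    """Construct (but do not send) a translation API request.

    `api_key` is a placeholder here; a real call needs a valid key.
    """
    params = urlencode({"q": text, "target": target_lang, "key": api_key})
    return Request(f"{ENDPOINT}?{params}", method="POST")

req = build_translate_request("Hello, world", "nl", "YOUR_API_KEY")
print(req.get_method(), req.full_url.split("?")[0])
```

The point of the sketch is the division of labour: the app supplies only its own data (the text and target language), while the entire translation capability lives behind the API.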
DevOps
As cloud native apps use a microservices architecture to build services as small, incremental blocks, continuous integration, continuous testing and continuous deployment become key requirements. DevOps helps you rapidly build and deploy quality cloud native apps.
Why Cloud Native App Development is the Future?
As businesses are aggressively embracing cloud technology, cloud native apps are turning out to be a beneficial option. Cloud native apps are faster to market and minimize risks. They can be easily deployed and managed using Docker and Kubernetes. Along with fault tolerance, they are capable of self-healing for most issues.
As these apps use a modular design, developing them is easy and cost-effective, and different teams can work on each service separately. Most importantly, once the apps are deployed, you can scale down services that are not in use, significantly saving operational cloud costs. The serverless and open-source model lets you optimize the pay-per-use subscription by reducing billed computing time to milliseconds. You can also scale up specific services: cloud native apps support auto-scaling, which scales individual services automatically without manual intervention. This is why most enterprises prefer cloud native apps. Downtime is minimal too, as these apps can quickly fail over to alternate regions when a server goes down.
As most mobile apps use web-centric programming languages such as Python, PHP, JavaScript and Ruby, cloud native apps built in similar environments perform well and deliver a consistent user experience. Developers no longer have to worry about the target environment and can focus on business requirements and features; adding new features or making changes to the app is easy as well. Enterprises love cloud native apps because they are easy to monitor and manage using tools such as AppDynamics and New Relic, and easy to debug using tools such as Splunk and Elasticsearch.
Challenges with Cloud Native App Development
Cloud native app development comes with certain challenges as well. The biggest is the sheer number of services: developers must be careful when managing and integrating hundreds of them, and should keep an eye on the size of each service. It is recommended to keep the number of services to the minimum the design allows.
Secondly, data security and storage require careful attention. As enterprises run containers on immutable infrastructure, any data stored inside a container is lost when the app is shut down, so you should make sure data is persisted securely. In addition, when an app uses the APIs of a specific cloud platform, you need to manage those dependencies carefully when migrating to another environment. Protecting data from unauthorized access is equally important.
As the cloud becomes an integral part of business processes, choosing cloud native app development helps you keep your infrastructure future-proof!
Accelerate Digital Transformation in your Organization with Low-Code/No-Code Application Development
Low-code or no-code app development is a method of building software using a visual development environment in which users drag and drop components and connect them to create applications of all types.
Beacon technology for Asset Tracking
The advent of the Internet of Things (IoT) has not only revolutionized IT networks but also paved the way for a range of new and innovative technologies; beacon technology is one of them. Since Apple introduced beacon technology in 2013, it has greatly evolved and keeps getting better. While beacons were initially used by retail businesses, their use is now extending to every field, and asset tracking with beacons is the new trend. Using beacon technology, businesses can implement cost-effective, highly scalable asset tracking solutions.
An Overview of Beacons
A beacon is a small Bluetooth-based device that continuously transmits radio signals. This small-form-factor device contains a small radio transmitter and a battery, and uses the Bluetooth Low Energy (BLE) protocol to transmit data. Because BLE consumes little energy, beacons can run for a long time without draining the battery: depending on the size and functionality of the device, beacon batteries can last from 6 months to 5 years. However, beacons transmit only small amounts of data, so you can't transfer audio or streaming content over them.
Beacon technology is similar to Near-Field Communication (NFC) technology; the difference lies in the range. While NFC works within about 8 inches, beacons can work within a range of up to 70 meters.
The State of Beacon Market
Beacons have become the first choice for many companies when it comes to Real-Time Location System (RTLS) solutions. According to Grand View Research, the global Bluetooth beacon market is expected to reach $58.7 billion by 2025, growing at a CAGR of 95.3% between 2017 and 2025. Similarly, Allied Market Research reports that the global beacon market will reach $14.839 billion by 2024, growing at a CAGR of 61.5% between 2018 and 2024. GM Insights puts the market value of beacons at $170 million in 2016, expected to grow at a CAGR of 80% between 2017 and 2024. The retail industry is the largest market for beacons, followed by the health sector.
Source: https://www.statista.com/statistics/827293/world-beacons-technology-market-revenue-by-end-user/
Analyst firm Statista reports that the global beacon market was valued at $519.6 million in 2016. This value is expected to reach $56.6 billion by 2026, growing at a CAGR of 59.8% between 2016 and 2026.
The Technology behind Beacons
Beacons perform a single task: they send a radio signal at pre-defined intervals. BLE-enabled devices such as smartphones receive these signals and act accordingly. Each beacon is assigned a unique identifier, and transmitting this identifier enables the receiver to determine the location of the beacon, and thereby the location of the user.
A beacon contains a small ARM (Advanced RISC Machines) processor, a Bluetooth connectivity module and a small battery. The processor runs firmware, written in a low-level language, that controls the beacon's behaviour. Since the beacon's job is simply to transmit its identifier, this modest CPU and battery are more than sufficient to process the data or encrypt the identifier. Inside the beacon is a small antenna that transmits electromagnetic waves using the Bluetooth protocol; the latest Bluetooth standard is 4.2. Beacon radio waves typically operate at 2.4 GHz, and the maximum data payload for the 4.2 standard is 257 bytes. Within such a small payload, a beacon transmits its UUID, major and minor values, and the signal power; receiving devices calculate the proximity of the beacon from the transmitted signal power.
A beacon transmits the following components:
Universally Unique Identifier (UUID): It is the unique identifier that differentiates your beacons from other devices outside your network.
Major Value: An unsigned 16-bit integer that identifies the group the beacon belongs to. For instance, beacons installed on the first floor would share the same major value. The value can be anything between 0 and 65535.
Minor Value: An unsigned 16-bit integer that differentiates an individual beacon within a group. The value likewise falls between 0 and 65535.
Here is an example of a UUID:
f626db66-3fa2-4e98-8013-bc5b71f0983c
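As a rough sketch of what a receiver does with these fields, the snippet below decodes an iBeacon-style payload tail (16-byte UUID, big-endian 16-bit major and minor, and a signed byte giving the calibrated TX power, i.e. the expected RSSI at 1 m) and estimates distance with the standard log-distance path-loss model. The demo UUID and values are made up, and the distance formula gives only a rough estimate:

```python
import struct
import uuid

def parse_ibeacon(payload: bytes):
    """Decode a 21-byte iBeacon-style payload tail:
    16-byte UUID, big-endian uint16 major and minor, signed TX power."""
    raw_uuid, major, minor, tx_power = struct.unpack(">16sHHb", payload)
    return str(uuid.UUID(bytes=raw_uuid)), major, minor, tx_power

def estimate_distance(rssi: int, tx_power: int, n: float = 2.0) -> float:
    # Log-distance path-loss model: d = 10 ** ((tx_power - rssi) / (10 * n)),
    # where n is an environment factor (~2 in free space). Rough estimate only.
    return 10 ** ((tx_power - rssi) / (10 * n))

# Build a demo payload (made-up UUID, major=1, minor=7, TX power=-59 dBm).
demo = uuid.UUID("f626db66-3fa2-4e98-8013-bc5b71f0983c").bytes + \
       struct.pack(">HHb", 1, 7, -59)
print(parse_ibeacon(demo))
print(round(estimate_distance(rssi=-75, tx_power=-59), 2))  # metres
```

This is exactly how a receiving app turns a 21-byte broadcast into "beacon 1/7 of network X is roughly this many metres away", which is all the context the use cases below need.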
When you talk about a beacon, you probably picture a physical device. However, some smartphones can act as both transmitter and receiver. Apple, for instance, doesn't offer a physical beacon; it incorporated beacon technology into the iOS 7 operating system. With more than 200 million iOS 7 devices in the market, Apple already has a considerable number of beacons out there.
How are beacons useful?
Beacons don't relay any message of substance; they simply broadcast their IDs. It is the job of the receiving device to turn this information into a useful solution. For instance, suppose a retail mall installs beacons throughout the building. When a customer browses the electronics section, the beacon installed there transmits its ID; the app on the customer's smartphone receives the ID and identifies the customer's location as the electronics section. The app can then send discounts and offers related to the mall's electronic products, customized for that customer.
Asset tracking with Beacons
Asset tracking and management is a key requirement for any industry. Beacons can significantly reduce the cost and complexity of this job. There are multiple ways to track assets using beacons. For instance, you can mount BLE receivers in a permanent fixture and tag assets to beacons. When an asset comes into proximity of a BLE-enabled receiver, it tracks the movement via mobile data or Wi-Fi and logs the data. You can either take action or store the information for management and analytics purposes. Using beacons, you can cost-effectively track thousands of assets in real-time, 24/7.
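The receiver-side bookkeeping described above can be sketched in a few lines. In this minimal example (asset and receiver names are hypothetical), each BLE receiver reports sightings of tagged assets, and the tracker keeps each asset's last known location and flags assets that haven't been seen recently:

```python
from dataclasses import dataclass, field

@dataclass
class AssetTracker:
    # asset_id -> (receiver_id, timestamp of last sighting)
    last_seen: dict = field(default_factory=dict)

    def record_sighting(self, asset_id: str, receiver_id: str, ts: float):
        """Called whenever a receiver detects a tagged asset nearby."""
        self.last_seen[asset_id] = (receiver_id, ts)

    def locate(self, asset_id: str):
        """Last known (receiver, timestamp) for an asset, or None."""
        return self.last_seen.get(asset_id)

    def missing(self, now: float, max_age: float):
        """Assets whose last sighting is older than max_age seconds."""
        return [a for a, (_, ts) in self.last_seen.items() if now - ts > max_age]

tracker = AssetTracker()
tracker.record_sighting("forklift-3", "dock-receiver-1", ts=100.0)
tracker.record_sighting("pallet-17", "aisle-receiver-4", ts=460.0)
print(tracker.locate("forklift-3"))
print(tracker.missing(now=500.0, max_age=300.0))
```

In a real deployment the sightings would arrive from receivers over Wi-Fi or mobile data, and the same structure scales to thousands of assets because each update is a single dictionary write.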
There are instances wherein you cannot mount BLE receivers in permanent fixtures in temporary locations such as conference halls or function halls. In such cases, you can fix beacons in different places and track assets using a mobile app. By tagging assets to beacons, you can track each asset from the mobile app. Implementation is easy as there is no need for wiring or costly installation.
For more accuracy and maximum coverage, you can augment the beacon setup with additional receivers: install fixed beacons and fixed BLE receivers, and complement them with moving beacons and moving BLE receivers. This setup can be extremely useful in low-signal areas such as hospital ICUs or high-security airport checkpoints. In environments that change quickly, such as large construction sites, you need a dynamic beacon architecture; in such cases, you can complement beacons with GPS and Wi-Fi. Depending on your environment, business type and requirements, choose the right beacon deployment.
Asset tracking with beacons is quickly gaining traction. Reports show that beacons have significantly reduced operational costs of asset management tasks. According to a Proximity Directory report, a total of 15,176,500 proximity sensors were installed globally in Q2, 2017. And, asset tracking with beacons is saving billions of dollars for the $9.1 billion logistics industry. Similarly, the health industry can save hundreds and thousands of dollars with an ROI of 275% by using asset tracking with beacons.
The advantages of beacons are enormous. Using beacons, you can track every item in a warehouse, track vehicles within your infrastructure, track equipment and machinery in a healthcare facility, track luggage trolleys in airports and railway stations, and so on. In addition, you can track people/employees by giving them BLE-enabled devices.
Bluetooth 5.0 offers additional capabilities in the form of 2x speed, 800% more broadcast messaging capacity, and 4x range. As such, beacons are sure to disrupt RTLS solutions in the days ahead.
Hybrid Cloud Architecture with Cisco CSR 1000v
The Cisco CSR 1000v series is a router software appliance from Cisco. It provides enterprise routing, VPN, firewall, IP SLA, and more. The CSR 1000v can be used to connect multiple VPCs across AWS regions and on-premises networks, so it can be used to avoid the AWS managed VPN service.
You can find the Cisco CSR 1000v in the AWS Marketplace, with a 30-day free trial to test it out. Be aware that this is not cheap: the trial covers only the software license, and you still pay the EC2 instance charges. Not all instance types are supported; the CSR 1000v supports only the m3 and c3 instance families.
The Cisco CSR 1000v can be used in various cloud network models, such as Transit VPC and multi-cloud networking.
The following is the architecture I used to connect multiple VPCs.
The two VPCs are in the N.Virginia and Ohio regions. Each VPC has an Internet Gateway, and the two are connected over VPN. In the Ohio region, we used the AWS managed VPN service to connect to the N.Virginia VPC, and as the on-premises edge router we used a Cisco RV110W small business router. In this post, I describe the steps to establish a VPN between two VPCs spread across two different AWS regions.
Steps to create VPCs in two regions:
- Create a VPC in the N.Virginia region with CIDR 10.0.0.0/16 and attach an Internet Gateway to it. You can do this from the CLI or through the management console.
aws ec2 create-vpc --cidr-block 10.0.0.0/16 --region us-east-1

Output:
{
    "Vpc": {
        "VpcId": "vpc-848344fd",
        "InstanceTenancy": "dedicated",
        "Tags": [],
        "CidrBlockAssociations": [
            {
                "AssociationId": "vpc-cidr-assoc-8c4fb8e7",
                "CidrBlock": "10.0.0.0/16",
                "CidrBlockState": {
                    "State": "associated"
                }
            }
        ],
        "Ipv6CidrBlockAssociationSet": [],
        "State": "pending",
        "DhcpOptionsId": "dopt-38f7a057",
        "CidrBlock": "10.0.0.0/16",
        "IsDefault": false
    }
}

aws ec2 create-internet-gateway --region us-east-1

Output:
{
    "InternetGateway": {
        "Tags": [],
        "InternetGatewayId": "igw-c0a643a9",
        "Attachments": []
    }
}

aws ec2 attach-internet-gateway --gateway-id <<IGW-ID>> --vpc-id <<VPC-ID>> --region us-east-1
- Create two subnets in the N.Virginia VPC: one for the CSR 1000v with CIDR 10.0.0.0/24 and another with CIDR 10.0.1.0/24.
aws ec2 create-subnet --cidr-block 10.0.0.0/24 --vpc-id <<VPC-ID>> --region us-east-1

Output:
{
    "Subnet": {
        "VpcId": "vpc-a01106c2",
        "AvailableIpAddressCount": 251,
        "MapPublicIpOnLaunch": false,
        "DefaultForAz": false,
        "Ipv6CidrBlockAssociationSet": [],
        "State": "pending",
        "AvailabilityZone": "us-east-1a",
        "SubnetId": "subnet-2c2de375",
        "CidrBlock": "10.0.0.0/24",
        "AssignIpv6AddressOnCreation": false
    }
}

aws ec2 create-subnet --cidr-block 10.0.1.0/24 --vpc-id <<VPC-ID>> --region us-east-1

Output:
{
    "Subnet": {
        "VpcId": "vpc-a01106c2",
        "AvailableIpAddressCount": 251,
        "MapPublicIpOnLaunch": false,
        "DefaultForAz": false,
        "Ipv6CidrBlockAssociationSet": [],
        "State": "pending",
        "AvailabilityZone": "us-east-1b",
        "SubnetId": "subnet-2c2de375",
        "CidrBlock": "10.0.1.0/24",
        "AssignIpv6AddressOnCreation": false
    }
}
- Create a route table in the N.Virginia VPC with a default route to the Internet Gateway, and associate the CSR subnet with it.
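Following the pattern of the earlier steps, this step can be sketched with the aws-cli as below; <<RTB-ID>> and <<CSR-SUBNET-ID>> are placeholders for the IDs returned by your own commands, in the same style as the placeholders above.

```shell
aws ec2 create-route-table --vpc-id <<VPC-ID>> --region us-east-1
aws ec2 create-route --route-table-id <<RTB-ID>> --destination-cidr-block 0.0.0.0/0 --gateway-id <<IGW-ID>> --region us-east-1
aws ec2 associate-route-table --route-table-id <<RTB-ID>> --subnet-id <<CSR-SUBNET-ID>> --region us-east-1
```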
- Launch the CSR 1000v from AWS Marketplace with the one-click launch. You can SSH into the CSR 1000v instance as ec2-user. Attach an Elastic IP to the CSR instance; this address will act as the Customer Gateway for the N.Virginia VPC. In later steps, we will configure the router to add static routes to the other subnets in the VPC and set up BGP to propagate routes over the VPN connection with the other VPC.
- In a similar fashion, create a VPC in the AWS Ohio region with CIDR 10.1.0.0/16 and create two subnets with CIDRs 10.1.0.0/24 and 10.1.1.0/24.
Steps to create a VPN connection in the AWS Ohio VPC
- Create a Customer Gateway. Open the VPC management console at console.aws.amazon.com. In the navigation pane, choose Customer Gateway and create a new Customer Gateway. Enter a name, set the routing type to Dynamic, and enter the EIP of the CSR 1000v instance in the N.Virginia VPC. The ASN is a 16-bit number and must be in the range 64512 to 65534.
- Create a VPG and attach it to the VPC. In the navigation pane, choose Virtual Private Gateway and create a VPG.
- Now create the VPN connection. In the navigation pane, choose VPN Connection, then Create New VPN Connection. Enter a name, select the VPG and Customer Gateway created previously, set the routing type to Dynamic, and create the VPN connection.
It will take a few minutes to create the VPN connection. When it is ready, download the configuration for the Cisco CSR from the drop-down menu.
Steps to establish the VPN connection on the CSR 1000v
- Add static routes for the other subnets in the N.Virginia VPC to the CSR 1000v. Every subnet in AWS has an implied router whose IP address is the subnet CIDR plus one. As the CSR router is in subnet 10.0.0.0/24, the implied router's IP address is 10.0.0.1. The implied router in each subnet has routes to all other subnets in the VPC.
> configure terminal
(config)# ip route 10.0.1.0 255.255.255.0 10.0.0.1
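The "CIDR plus one" rule above can be sketched as a small helper. This is only an illustration; it assumes adding one to the last octet stays within range, which holds for common subnets like the /24s used here.

```shell
# Derive the AWS implied-router address (network address + 1) from a
# subnet CIDR. Sketch only: assumes the +1 stays within the last octet.
implied_router() {
  net="${1%/*}"                            # strip the /prefix length
  echo "${net%.*}.$(( ${net##*.} + 1 ))"   # bump the last octet by one
}

implied_router 10.0.0.0/24   # prints "10.0.0.1"
```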
- Configure BGP. Use the ASN that you entered while creating the Customer Gateway in the Ohio VPC; above we used 64512.
> configure terminal
(config)# router bgp 64512
(config-router)# timers bgp keepalive holdtime
(config-router)# bgp log-neighbor-changes
(config-router)# end
This step might not be necessary, but as a good practice I applied the above configuration before pasting in the configuration file downloaded earlier.
- Apply the configuration downloaded when the VPN connection was created. After you have applied those settings on the CSR, the management console will show both VPN tunnels as UP.
Testing connectivity between the two VPCs
- Launch an instance in subnet1 of the Ohio region VPC with a public IPv4 address. SSH into the instance and ping the CSR 1000v instance's private IP.
- Similarly, you can check connectivity in the other direction by pinging the instance in subnet1 of the Ohio region VPC by its private IP.
Troubleshooting:
> Route propagation must be enabled on the route table in the Ohio region VPC.
> You must configure the CSR 1000v as a NAT instance so the subnets in the N.Virginia region can reach hosts in the Ohio region VPC via the CSR 1000v. After doing so, update the route table with the CSR 1000v instance ID as the target.
> Allow ICMP in the security groups of all instances.
Thanks and Regards
Naveen
AWS Solution Architect @CloudTern
Custom AMI with Custom hostname
I have been using Amazon Web Services for a while now, and it has let me get my hands dirty with various services. In AWS, an AMI (Amazon Machine Image) provides the information, such as the operating system, application server, and applications, needed to launch a virtual server (also called an instance) in the cloud. There are lots of AMIs provided by AWS or by the community, and you can choose the one that meets your requirements. You can customize an instance launched from an AWS-provided AMI and create your own AMI from it. All AMIs created by you are private by default.
Interestingly, instances launched from public AMIs in AWS come with a default user name and no password authentication, which I sometimes don't like. For example, instances launched with Amazon Linux have the default user name ec2-user, and for Ubuntu instances the default user name is ubuntu.
Instances launched from public AMIs also do not allow you to change the hostname on the fly using user-data. The hostname of any instance launched from a public AMI looks something like
ip-<Private-IPv4>
Example: ip-172-1-20-201
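The default name is simply the private IPv4 address with its dots replaced by hyphens, prefixed with "ip-". A one-line sketch of that format:

```shell
# Reproduce the default EC2 hostname format from a private IPv4 address.
default_hostname() {
  echo "ip-$(echo "$1" | tr '.' '-')"
}

default_hostname 172.1.20.201   # prints "ip-172-1-20-201"
```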
So I decided to create an AMI with the default user Naveen and password *****, and I would like my instance's hostname to be myhostname.com. I will use a cloud-config script to do that.
cloud-init is a multi-distribution package that handles early initialization of cloud instances. More information can be found in the cloud-init documentation. Some of the tasks performed by cloud-init are:
- Set hostname
- Set the default locale and default user
- Generate host private ssh keys
- Parse and handle user-data
Custom AMI
To create my custom AMI with the above-mentioned changes, I followed the steps below:
1. I launched a t2.micro instance with the Amazon Linux AMI 'ami-4fffc834'. You can launch the instance using the AWS management console or the AWS command line (aws-cli). I used the aws-cli to launch the instance.
aws ec2 run-instances --image-id ami-4fffc834 --count 1 --instance-type t2.micro --key-name Naveen
The above command will launch one t2.micro instance with the key name ‘Naveen’.
2. As I launched the instance with Amazon Linux, the default user name is ec2-user. Amazon Linux sets the default user using cloud-init. The configuration for the default user can be found in /etc/cloud/cloud.cfg.d/00_default.cfg. The config file looks something like this:
system_info:
  # This will affect which distro class gets used
  distro: amazon
  distro_short: amzn
  # Default user name + that default user's groups (if added/used)
  default_user:
    name: ec2-user
    lock_passwd: true
    gecos: EC2 Default User
    groups: [ wheel ]
    sudo: [ "ALL=(ALL) NOPASSWD:ALL" ]
    shell: /bin/bash
  # Other config here will be given to the distro class and/or path classes
  paths:
    cloud_dir: /var/lib/cloud/
    templates_dir: /etc/cloud/templates/
    upstart_dir: /etc/init/
  package_mirrors:
    - arches: [ i386, x86_64 ]
      search:
        regional:
          - repo.%(ec2_region)s.%(services_domain)s
          - repo.%(ec2_region)s.amazonaws.com
  ssh_svcname: sshd
The 00_default.cfg contains other settings as well, but I have posted only the part that needs to be changed. As we can see, the default user name for this distro is ec2-user. lock_passwd: true means a user trying to log in as ec2-user is not allowed to authenticate with a password.
3. I changed the user name to Naveen and set lock_passwd: false in the config file. However, this config file does not accept a plain-text password; you need to give the password as a hash. To generate one, I used the following commands on an Ubuntu machine:
# mkpasswd comes with whois package
sudo apt-get install whois
# To generate a hash using mkpasswd
mkpasswd --method=SHA-512
# This will prompt for a password; after you enter it,
# mkpasswd prints the hash to the console, e.g.:
# $6$G0Vu5qLWx4cZSHBx$0VYLSoIQxpLKVhlU.oBJdVSW7Ellswerdf.r/ZqWRuijiyTjPAXJzeGwYe1D/f94tt/tf
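If mkpasswd is not available, openssl (version 1.1.1 or later) can generate the same SHA-512 crypt-style hash. This is an alternative, not the command used in the original steps, and 'MySecretPassword' is a placeholder:

```shell
# Generate a SHA-512 crypt hash with openssl instead of mkpasswd.
# 'MySecretPassword' is a placeholder; in practice, prefer entering the
# password interactively rather than putting it on the command line.
HASH=$(openssl passwd -6 'MySecretPassword')
echo "$HASH"   # prints a hash beginning with $6$
```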
Copy the generated hash and add it to the 'passwd' key in the config file. After making the final changes, the config file looks like this:
system_info:
  # This will affect which distro class gets used
  distro: amazon
  distro_short: amzn
  # Default user name + that default user's groups (if added/used)
  default_user:
    name: Naveen
    lock_passwd: false
    passwd: $6$G0Vu5qLWx4cZSHBx$0VYLSoIQxpLKVhlU.oBJdVSW7Elwerfwq.r/ZqWRuijiyTjPAXJzeGwYe1D/f94tt/tf1lXQYJtMtQLpvAqE1
    gecos: Modified Default User name
    groups: [ wheel ]
    sudo: [ "ALL=(ALL:ALL) ALL" ]
    shell: /bin/bash
  # Other config here will be given to the distro class and/or path classes
  paths:
    cloud_dir: /var/lib/cloud/
    templates_dir: /etc/cloud/templates/
    upstart_dir: /etc/init/
  package_mirrors:
    - arches: [ i386, x86_64 ]
      search:
        regional:
          - repo.%(ec2_region)s.%(services_domain)s
          - repo.%(ec2_region)s.amazonaws.com
  ssh_svcname: sshd
4. Finally, I made the following changes in rc.local to make the ssh service accept password authentication, and changed preserve_hostname to false in /etc/cloud/cloud.cfg.
if grep -Fxq "PasswordAuthentication no" /etc/ssh/sshd_config
then
    sed -i 's/^PasswordAuthentication.*/PasswordAuthentication yes/' /etc/ssh/sshd_config
    /etc/init.d/sshd restart
fi
With the changes above, I have added the default user Naveen with a default password. I then created an AMI from the instance using the aws-cli:
aws ec2 create-image --instance-id i-09ebf4e320b0cadca --name "ONE_AMI"
Output:
{
"ImageId": "ami-ebec0c91"
}
Cloud-config for setting the hostname
With the customized AMI I can launch an instance with the user name Naveen, but the hostname will still be in the format ip-<Private-IPv4>. So I used the cloud-config script below to change the hostname.
#cloud-config
#set the hostmachine name
fqdn: myhostname.com
#Add additional users for the machine
users:
- name: sysadmin
groups: [root,wheel]
passwd: $6$G0Vu5qLWx4cZSHBx$0VYLSoIQxpLKVhlU.oBJdVSW7EllsvFybq.r/ZqWRuijiyTjPAXJzeGwYe1D/f94tt/tf1lXQYJtMtQLpvAqE1
sudo: ALL=(ALL:ALL) ALL
#Final Message
final_message: "The system is finally up, after $UPTIME seconds"
The above script will create the instance with the hostname myhostname.com and create a user sysadmin. The script is passed as user-data when launching an instance:
aws ec2 run-instances --image-id ami-4240a138 --count 1 --instance-type t2.micro --user-data file://cloud.cfg
The above launches an instance without a key pair, which means I can only log into the instance using the default user Naveen or the sysadmin user created in the cloud-config script passed as user-data.
Finally, I have an instance with my custom default user name and password, and the hostname myhostname.com.
Path to the AWS Cloud
Introduction
You’ve heard of Software as a Service (SaaS), Infrastructure as a Service, and Platform as a Service; there is even XaaS, to describe Anything as a Service. Now you can provide all of your company’s functions "as a Service": Your Company as a Service (YCaaS). You will be more scalable, more available, and better connected to employees and customers, as well as suppliers. Just hop on this cloud…
This blog is written to simplify your trip to the cloud. It is a general-purpose document, and specific details will vary with your needs. This guide covers migration to the AWS Cloud platform; you will need an AWS account to begin. The result will be a very flexible and highly available platform that hosts services for internal or external use. Services may be turned up or discontinued, temporarily or permanently, very easily, and may be scaled up or down automatically to meet demand. Because AWS services are billed as a service, computing becomes an operational expense rather than a capital expense (CAPEX).
The Framework
Exact needs will vary based on the services being migrated to the AWS Cloud. The benefits of a structured, reliable framework will transform your organization’s approach to planning and offering online services. The AWS CAF (Cloud Adoption Framework) offers a structure for developing efficient and effective plans for cloud migration. With the guidance and best practices available within that framework, you will build a comprehensive approach to cloud computing across your organization.
Planning
Using the Framework (AWS CAF) to break complicated plans down into simple areas of focus will speed the migration and improve its chances of success. People, process, and technology are represented at the top level. The components of the focus areas include:
- Value (ROI)
- People (Roles)
- Priority and Control
- Applications and Infrastructure
- Risk and Compliance
- Operations
Value, or return on investment, measures the monetary impact on your business. For customer-facing services, this could mean reaching more customers faster, with greater engagement and more meaningful transactions. For internal services, ease of access and pertinence of content add value.
People occupy many roles. Organizationally, internal stakeholders will need to be involved in decision making and in ongoing support. Business application stakeholders own outcomes both in the planning stages and in long-term utilization. Content providers have initial and ongoing responsibilities. End users depend on the platform and on the other stakeholders.
Priority and control of a service are defined by the resources dedicated to its migration and the allowable disruption. Priorities are affected by readiness: new services are often easier to migrate due to platform compatibility and may be moved quickly, ahead of more cumbersome services. Mission-critical services will require the resources and special attention that go with critical status.
Risk and compliance are defined by the category of the service's usage. Commerce with external entities will demand PCI compliance. Personal information of internal entities will demand HIPAA compliance. CRM and general information will need copyright identification.
Operations are involved in the migration phase because the process of service migration affects business operations. Because migration is not a day-to-day business process, it requires its own resources, planning, and priorities, and these priorities affect the resources available for the migration. A fast migration may require more resources: people, bandwidth, and communications. A lower priority allows for fewer resources and, typically, less disruption.
Migration process
Migration is a process that will ride on top of the normal business process. In order to successfully migrate to the cloud, all of these considerations will affect planning. Given priorities that are decided upon, identify the people and roles that will be involved in the migration. Communicate the specific outcomes the team will be responsible for. Be specific, gain agreement and ownership. Deliver the resources that the team identifies as needed to meet goals. This includes time. If the team has to be away from normal day to day responsibilities business process must be temporarily re-routed. This will involve support teams one level removed from the migration.
Outsourced teams can provide temporary resources in highly specialized roles to reduce the impact on business operations. Do the initial planning to determine your needs. Choose an outsourced team based on experience in the specific roles you will need to fill. Integrate the imported resources with appropriate internal team members. Give ownership to the internal team and empower them to act when needs arise.
Construct the entire migration model before beginning the process. Build the budget and prepare for the impact of resource dedication up front. Measure progress against the model on a weekly basis. Communicate to the team that adjustments will be needed and that communication is how these adjustments are dealt with. Remember the butterfly effect: every change will result in cascading consequences. With reliable communication, everyone will be more comfortable with the temporary effects of this over-the-top process.
When the team and their roles are communicated, the non-human resources can be quantified. How much bandwidth will be required to meet identified goals? Is the network capable of delivering on the required bandwidth, or will infrastructure need to be upgraded? Consider the impact on infrastructure on critical business services that may occur during the migration. Be prepared for contingencies and unexpected demands.
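As a back-of-envelope sketch of the bandwidth question, the helper below estimates how long a bulk data migration takes at a given dedicated bandwidth; the numbers are illustrative, not from the text.

```shell
# Rough bulk-transfer time: data size (GB) over a dedicated link (Mbps).
# 1 GB is taken as 8000 megabits for simplicity; protocol overhead,
# compression, and retransmits are ignored.
transfer_days() {
  awk -v gb="$1" -v mbps="$2" \
      'BEGIN { printf "%.1f\n", (gb * 8000 / mbps) / 86400 }'
}

transfer_days 50000 1000   # 50 TB over a dedicated 1 Gbps link: prints "4.6" (days)
```

Even this crude estimate makes the planning point: a multi-terabyte migration ties up a dedicated link for days, which is why peak-hour demand and contingencies need their own consideration.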
If network augmentation is required, how deep into your infrastructure will you need to adjust? As data migration paths are identified and bandwidth is dedicated, will other segments of the network be affected? These network augmentations have power and space impacts. Downstream, there will be additional people affected as configurations and replacement equipment are implemented.
Peak demand capacity is often a separate planning impact. Peak busy hours will result in oversubscription of available bandwidth. With oversubscription, will come service impact. The impact is easily underestimated because saturation will lengthen the impact duration. Along with the capacity planning, there needs to be service level consideration. What tolerance to latency will the user base have?
Availability planning during migration will determine impact in the event of the disaster. Business continuity plans may need to be modified during the migration period. Existing failover functions will not include the migration paths. If not addressed in advance, an unplanned outage will disrupt your migration and likely have a negative business impact. Whatever availabilities are associated with your services which are migrating will need planning for the migration.
The cost of maintaining duplicate services during migration includes licensing. When two systems run simultaneously, the license expense is doubled. Depending on demand, and with planning, some efficiencies may keep this cost under the maximum. This may also be an opportunity to eliminate marginally needed or legacy expenses.
In the long run, you will reap the rewards. Savings include server maintenance, break-fix and upgrades, backups both local and off-site, environmental conditioning maintenance, and power, plus the people time involved in maintenance, break-fix, upgrades, and paying the bills for these services. Importantly, scalability in the AWS cloud does not require as much advance planning, over-capacity implementation, or over-provisioning for future expansion, and capacity can be reduced on the fly as well.
The total return on investment will include a cost increase during planning and migration, followed by long-term savings from increased efficiencies and cost reductions. Savings in the total cost of ownership grow over time, since the cloud platform no longer carries the associated direct and indirect costs. There is also an intangible return in technology upgrades: the obsoleting of capital investments will greatly decrease, as technology evolves and is implemented invisibly, available for immediate use in the cloud platform.
Contributors
William