1) Explain how “Infrastructure as Code” is processed or executed in AWS?
- The code for the infrastructure is written in a simple format such as JSON
- This JSON code is organized into files called templates
- These templates can be deployed on AWS and then managed as stacks
- The CloudFormation service then carries out the creating, deleting, and updating operations on the stack
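The template format described above can be sketched with a minimal example. This is an illustration only: the resource name and description are hypothetical, while the top-level keys are the standard CloudFormation template sections.

```json
{
  "AWSTemplateFormatVersion": "2010-09-09",
  "Description": "Illustrative stack: a single S3 bucket",
  "Resources": {
    "ExampleBucket": {
      "Type": "AWS::S3::Bucket"
    }
  }
}
```

Deploying this file as a stack lets CloudFormation handle the create, update, and delete operations for the bucket.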
2) Mention what are the key aspects or principles behind DevOps?
The key aspects or principles behind DevOps are:
- Infrastructure as code
- Continuous deployment
3) What are the core operations of DevOps with application development and with infrastructure?
The core operations of DevOps with application development are:
- Code building
- Code coverage
- Unit testing
With infrastructure, the core operations are:
- Provisioning
- Configuration
- Orchestration
- Deployment
4) What is version control and why should VCS be used?
Define version control and talk about how this system records any changes made to one or more files and saves them in a centralized repository. VCS tools help you recall previous versions and perform the following:
- Go through the changes made over a period of time and check what works versus what doesn’t.
- Revert specific files or specific projects back to an older version.
- Examine issues or errors that have occurred due to a particular change.
Using VCS gives developers the flexibility to work simultaneously on a particular file, and all modifications can be logically combined later.
5) Why are configuration management processes and tools important?
Talk about the multiple software builds, releases, revisions, and versions for each piece of software or testware being developed. Move on to explain the need for storing and maintaining data, keeping track of development builds, and simplified troubleshooting. Don’t forget to mention the key CM tools that can be used to achieve these objectives, and talk about how tools like Puppet, Ansible, and Chef help automate software deployment and configuration across several servers.
6) How is IaC implemented using AWS?
Start by talking about the age-old mechanism of writing commands into script files and testing them in a separate environment before deployment, and how this approach is being replaced by IaC. Similar to the code written for other services, with AWS, IaC allows developers to write, test, and maintain infrastructure entities in a descriptive manner, using formats such as JSON or YAML. This enables easier development and faster deployment of infrastructure changes.
7) Explain what Memcached is?
Memcached is a free, open-source, high-performance, distributed memory object caching system. Its primary objective is to enhance the response time for data that can otherwise be recovered or constructed from some other source or database. It is used to avoid querying a SQL database or another source repeatedly to fetch data for concurrent requests.
Memcached can be used for:
- Social networking -> Profile caching
- Content aggregation -> HTML/page caching
- Ad targeting -> Cookie/profile tracking
- Relationships -> Session caching
- E-commerce -> Session and HTML caching
- Location-based services -> Database query scaling
- Gaming and entertainment -> Session caching
Memcached helps to:
- Speed up application processes
- Determine what to store and what not to
- Reduce the number of retrieval requests to the database
- Cut down on I/O (input/output) access to the hard disk
Drawbacks of Memcached are:
- It is not a persistent data store
- It is not a database
- It is not application-specific
- It cannot cache large objects
8) Mention some important features of Memcached?
Important features of Memcached include:
- CAS tokens: A CAS token is attached to any object retrieved from the cache. You can use that token to save your updated object.
- Callbacks: These simplify the code.
- getDelayed: This reduces the time your script spends waiting for results to come back from the server.
- Binary protocol: You can use the binary protocol instead of ASCII with the newer client.
- Igbinary: Previously, the client always had to serialize values with complex data, but with Memcached you can use the igbinary option.
9) Explain whether it is possible to share a single instance of Memcached between multiple projects?
Yes, it is possible to share a single instance of Memcached between multiple projects. Memcached is a memory store, and you can run it on one or more servers. You can also configure your client to speak to a particular set of instances, so you can run two different Memcached processes on the same host and yet they are completely independent. However, if you have partitioned your data, it becomes necessary to know which instance to get the data from or put it into.
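One common way to share a single instance safely is to namespace keys per project. Here is a sketch, using a dict as a stand-in for the shared Memcached server; `NamespacedCache` is an illustrative name, not a real client API.

```python
# Two projects sharing one cache without key collisions.
shared_cache = {}  # stands in for the single shared Memcached instance

class NamespacedCache:
    def __init__(self, prefix):
        self.prefix = prefix  # per-project namespace

    def _key(self, key):
        return f"{self.prefix}:{key}"

    def set(self, key, value):
        shared_cache[self._key(key)] = value

    def get(self, key):
        return shared_cache.get(self._key(key))
```

Each project constructs its wrapper with its own prefix, so both can store a `session` key without overwriting each other.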
10) Explain how you can minimize Memcached server outages?
- When one instance fails, several of them go down, which puts a larger load on the database server as clients re-request the lost data. To avoid this, write your code to minimize cache stampedes; that will keep the impact minimal.
- Another way is to bring up an instance of Memcached on a new machine using the lost machine’s IP address.
- Code is another option, as it gives you the liberty to change the Memcached server list with minimal work.
- Setting a timeout value is another option that some Memcached clients implement for server outages. When your Memcached server goes down, the client will keep trying to send requests until the timeout limit is reached.
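The “change the server list in code” option above relies on clients mapping each key deterministically to a server. A minimal sketch follows; `pick_server` is an illustrative name, and note that with plain modulo hashing, removing a server remaps most keys, which is why real clients often prefer consistent hashing.

```python
import hashlib

def pick_server(key, servers):
    # Deterministically map a key to one server in the list; every client
    # configured with the same list agrees on the mapping.
    digest = hashlib.md5(key.encode()).hexdigest()
    return servers[int(digest, 16) % len(servers)]
```

Removing a dead server from the list redirects its keys to the remaining servers on the next request.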
11) How do you set up a script to run every time a repository receives new commits through push?
There are three ways to configure a script to run every time a repository receives new commits through push: define a pre-receive, an update, or a post-receive hook, depending on when exactly the script needs to be triggered.
- Pre-receive hook in the destination repository is invoked when commits are pushed to it. Any script bound to this hook will be executed before any references are updated. This is a useful hook to run scripts that help enforce development policies.
- Update hook works in a similar manner to pre-receive hook, and is also triggered before any updates are actually made. However, the update hook is called once for every commit that has been pushed to the destination repository.
- Finally, post-receive hook in the repository is invoked after the updates have been accepted into the destination repository. This is an ideal place to configure simple deployment scripts, invoke some continuous integration systems, dispatch notification emails to repository maintainers, etc.
Hooks are local to every Git repository and are not versioned. Scripts can either be created within the hooks directory inside the “.git” directory, or they can be created elsewhere and links to those scripts can be placed within the directory.
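As a sketch, a post-receive hook can be written in any language, including Python. Git feeds the hook one line per updated ref on stdin in the form “&lt;old-sha&gt; &lt;new-sha&gt; &lt;ref-name&gt;”; the deploy trigger below is hypothetical.

```python
#!/usr/bin/env python3
# Post-receive hook sketch: save as .git/hooks/post-receive and make it
# executable. Git writes one line per updated ref to the hook's stdin:
#   <old-sha> <new-sha> <ref-name>

def parse_ref_update(line):
    old, new, ref = line.split()
    return {"old": old, "new": new, "ref": ref}

def handle(update):
    # Hypothetical reaction: trigger a deploy only for the master branch.
    if update["ref"] == "refs/heads/master":
        return f"deploy {update['new'][:7]}"
    return "ignored"

# In the real hook, stdin drives the loop:
#   import sys
#   for line in sys.stdin:
#       print(handle(parse_ref_update(line)))
```

Because post-receive runs after the update is accepted, it is safe for notifications and deployments but cannot reject the push.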
12) What is Git bisect? How can you use it to determine the source of a (regression) bug?
I would suggest you first give a short definition of Git bisect: Git bisect is used to find the commit that introduced a bug by using binary search. The command for Git bisect is:
git bisect <subcommand> <options>
Now that you have mentioned the command above, explain what it does. This command uses a binary search algorithm to find which commit in your project’s history introduced a bug. You use it by first telling it a “bad” commit that is known to contain the bug, and a “good” commit that is known to be from before the bug was introduced. Git bisect then picks a commit between those two endpoints and asks you whether the selected commit is “good” or “bad”. It continues narrowing down the range until it finds the exact commit that introduced the change.
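The narrowing process just described is an ordinary binary search. Assuming a commit history ordered oldest to newest, it can be sketched in Python; `first_bad_commit` and `is_bad` are illustrative names, not part of Git.

```python
def first_bad_commit(commits, is_bad):
    # commits is ordered oldest to newest; is_bad stands in for your manual
    # good/bad verdict (or the script passed to `git bisect run`). Assumes
    # the last commit is known to be bad.
    lo, hi = 0, len(commits) - 1
    while lo < hi:
        mid = (lo + hi) // 2
        if is_bad(commits[mid]):
            hi = mid          # the bug was introduced at mid or earlier
        else:
            lo = mid + 1      # the bug was introduced after mid
    return commits[lo]
```

Each probe halves the remaining range, which is why bisecting thousands of commits takes only a handful of checks.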
13) Describe branching strategies you have used.
This question is asked to test your branching experience, so tell them how you have used branching in your previous job and what purpose it serves. You can refer to the points below:
- Feature branching
A feature branch model keeps all of the changes for a particular feature inside of a branch. When the feature is fully tested and validated by automated tests, the branch is then merged into master.
- Task branching
In this model, each task is implemented on its own branch, with the task key included in the branch name. It is easy to see which code implements which task: just look for the task key in the branch name.
- Release branching
Once the develop branch has acquired enough features for a release, you can clone that branch to form a release branch. Creating this branch starts the next release cycle, so no new features can be added after this point; only bug fixes, documentation generation, and other release-oriented tasks should go into this branch. Once it is ready to ship, the release branch gets merged into master and tagged with a version number. In addition, it should be merged back into the develop branch, which may have progressed since the release was initiated.
In the end, tell them that branching strategies vary from one organization to another, and that you know the basic branching operations like delete, merge, checking out a branch, etc.
14) Why do you need a Continuous Integration of Dev & Testing?
For this answer, you should focus on the need for Continuous Integration. My suggestion would be to mention the below explanation in your answer:
Continuous Integration of Dev and Testing improves the quality of software and reduces the time taken to deliver it, by replacing the traditional practice of testing only after all development is complete. It allows the Dev team to easily detect and locate problems early, because developers need to integrate code into a shared repository several times a day, and each check-in is then automatically tested.
15) Explain how you can setup Jenkins job?
My approach to this answer would be to first mention how to create a Jenkins job: go to the Jenkins top page, select “New Job”, then choose “Build a free-style software project”.
Then you can tell the elements of this freestyle job:
- Optional SCM, such as CVS or Subversion where your source code resides.
- Optional triggers to control when Jenkins will perform builds.
- Some sort of build script that performs the build (ant, maven, shell script, batch file, etc.) where the real work happens.
- Optional steps to collect information out of the build, such as archiving the artifacts and/or recording javadoc and test results.
- Optional steps to notify other people/systems of the build result, such as sending e-mails, IMs, updating the issue tracker, etc.
16) Mention some of the useful plugins in Jenkins.
- Maven 2 project
- Amazon EC2
- HTML publisher
- Copy artifact
- Green Balls
These are the plugins I feel are the most useful. If you want to include any other plugin that is not mentioned above, you can add it as well, but make sure you first mention the plugins stated above and then add your own.
17) How will you secure Jenkins?
The way I secure Jenkins is mentioned below:
- Ensure global security is on.
- Ensure that Jenkins is integrated with my company’s user directory with the appropriate plugin.
- Ensure that matrix-based security (or the project-based matrix) is enabled to fine-tune access.
- Automate the process of setting rights/privileges in Jenkins with a custom version-controlled script.
- Limit physical access to Jenkins data/folders.
- Periodically run security audits on the same.
18) Explain how you can move or copy Jenkins from one server to another?
- Move a job from one installation of Jenkins to another by simply copying the corresponding job directory.
- Make a copy of an existing job by making a clone of a job directory by a different name.
- Rename an existing job by renaming a directory. Note that if you change a job name you will need to change any other job that tries to call the renamed job.
19) How do you automate testing in the DevOps lifecycle?
In DevOps, developers are required to commit all changes made to the source code to a shared repository. Continuous Integration tools like Jenkins will pull the code from this shared repository every time a change is made and deploy it for Continuous Testing, which is done by tools like Selenium.
In this way, any change in the code is continuously tested unlike the traditional approach.
20) How does Nagios work?
Nagios runs on a server, usually as a daemon or service. It periodically runs plugins residing on the same server; they contact hosts or servers on your network or on the internet. You can view the status information using the web interface, and you can also receive email or SMS notifications if something happens.
The Nagios daemon behaves like a scheduler that runs certain scripts at certain moments. It stores the results of those scripts and will run other scripts if these results change.
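The scheduler behaviour just described can be sketched in Python. Everything here is a stand-in: the check callables play the role of Nagios plugins, and `notify` stands in for its notification commands.

```python
# Scheduler sketch: run checks, remember the last state per service, and
# notify only when a state changes.
last_state = {}
notifications = []

def notify(name, state):
    # Stand-in for Nagios notification commands (email, SMS, ...).
    notifications.append(f"{name} is now {state}")

def run_checks(checks):
    # checks maps a service name to a plugin-like callable that returns a
    # state string such as "OK" or "CRITICAL".
    for name, check in checks.items():
        state = check()
        if last_state.get(name) != state:
            notify(name, state)  # state changed: alert someone
        last_state[name] = state
```

Running the same check twice with an unchanged result produces no second alert, which mirrors how Nagios only reacts when results change.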
Now expect a few questions on Nagios components like plugins, NRPE, etc.