Migrating a GitLab or GitHub git repo to AWS CodeCommit

AWS CodeCommit is a managed git repository service. It is generally cheaper than the paid “pro” plans offered by many companies. That said, this document applies to almost any git migration: you clone the source repo, add a new remote and push the changes.

Steps are given below

SSH Key

First, create an SSH key pair if you have not done so already. If you do not have one, use the ssh-keygen command to generate a public and private key. (Windows users: install the Git for Windows client and run the commands in the bash terminal that comes with it.)
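A minimal sketch of generating a key pair, assuming the default file locations are fine for you (the email address is only a comment used to label the key):

ssh-keygen -t rsa -b 4096 -C "you@example.com"
# The private key is written to ~/.ssh/id_rsa and the public key to ~/.ssh/id_rsa.pub
cat ~/.ssh/id_rsa.pub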

Upload your SSH public key to your user settings area in GitHub or GitLab.

Clone the source Repo

git clone --mirror git@gitlab.com:blah/myproject.git

or

git clone --mirror git@github.com:blah/myproject.git

or any other URL you might have.

This creates a myproject.git folder on your computer. This is what we are going to push to the CodeCommit repo. Remember, this is a bare repo, essentially a snapshot of everything that will go to CodeCommit. It contains all branches and the full commit history up to now, so make no further changes in your original git repo.

Prepare AWS account

Account Settings

Log in to the AWS IAM console, find and click your username under “Users”, and go to the “Security credentials” tab. Find the “Upload SSH public key” button and upload your SSH public key there. After you add it, the screen shows the key along with an SSH key ID associated with it. Copy the ID; it looks something like APKAEBLAHEXAMPLE.

Create a ~/.ssh/config file on your machine with contents like the following, replacing the User value with your SSH key ID and IdentityFile with the path to your private key. Your machine is then ready to talk to CodeCommit.

Host git-codecommit.*.amazonaws.com
    User APKAEBLAHEXAMPLE
    IdentityFile ~/.ssh/id_rsa
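You can verify the setup before pushing anything. The endpoint below assumes the us-east-1 region; adjust it to match yours:

ssh git-codecommit.us-east-1.amazonaws.com
# On success, the reply says you have authenticated over SSH, then the connection closes.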

Create a CodeCommit Repo

Go to AWS CodeCommit and click “Create repository”. Provide a name and click Create.

Prepare and push the repo to CodeCommit

Copy the SSH clone URL of the new repository from the AWS CodeCommit console.

Now run the following commands

cd myproject.git
git remote add aws ssh://git-codecommit.<yourregion>.amazonaws.com/v1/repos/<reponame>
git push --all aws
git push --tags aws

You should now be able to open the repo in the CodeCommit console and see that all your branches, tags and commits are there.

Remove the myproject.git folder and do a fresh git clone from CodeCommit. Use this new clone for your future work.
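For example, reusing the same placeholders as the push step above:

cd ..
rm -rf myproject.git
git clone ssh://git-codecommit.<yourregion>.amazonaws.com/v1/repos/<reponame> myproject
cd myproject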

If you used the wiki feature in GitLab, note that the wiki is yet another git repo; the “checkout”/clone option on the wiki page gives you its URL. I recommend creating a separate repo in AWS CodeCommit and repeating the same process above to save your wiki pages as well.
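A sketch of that wiki migration, assuming the usual GitLab convention where the wiki lives in a <project>.wiki.git repo and you have already created a second CodeCommit repo for it:

git clone --mirror git@gitlab.com:blah/myproject.wiki.git
cd myproject.wiki.git
git remote add aws ssh://git-codecommit.<yourregion>.amazonaws.com/v1/repos/<wikireponame>
git push --all aws
git push --tags aws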

Securing HTTP Methods in AWS ALB

People often open all traffic on an ALB and pass it straight to the application, which is a security issue. If you want to allow only GET, HEAD and OPTIONS, and block methods like POST or DELETE, it is better to enforce that in the Application Load Balancer’s listener rules.

Here I am using a CloudFormation template (YAML) to block everything with an HTTP 405 response in the default action, and then adding a custom rule that forwards only GET, HEAD and OPTIONS.

  HTTPSListener:
    Type: 'AWS::ElasticLoadBalancingV2::Listener'
    Properties:
      Certificates:
        - CertificateArn: !Ref SSLARN
      SslPolicy: ELBSecurityPolicy-TLS-1-2-2017-01
      DefaultActions:
        - Type: fixed-response
          FixedResponseConfig:
            StatusCode: "405"
            ContentType: "text/plain"
            MessageBody: "Invalid Request."
      LoadBalancerArn: !Ref LoadBalancer
      Port: 443
      Protocol: HTTPS
  HTTPSFilter1:
    Type: AWS::ElasticLoadBalancingV2::ListenerRule
    Properties:
      Actions:
        - Type: forward
          TargetGroupArn: !Ref TargetGrp
      Conditions:
        - Field: http-request-method
          HttpRequestMethodConfig:
            Values:
              - GET
              - HEAD
              - OPTIONS
      ListenerArn: !Ref HTTPSListener
      Priority: 1

You could do the same in the console by going to EC2 -> Load Balancers, selecting your ALB, opening the Listeners tab and clicking “View/edit rules”. Add the method rule there, and remember to change the default rule to return a fixed 405 response instead of forwarding.
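Once the rules are in place, a quick way to check the behaviour from a shell (example.com stands in for your own domain):

# Allowed method: returns whatever status code your target group sends back
curl -s -o /dev/null -w "%{http_code}\n" -X GET https://example.com/

# Blocked method: should return 405 from the listener's default action
curl -s -o /dev/null -w "%{http_code}\n" -X DELETE https://example.com/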

Setting up DNS Server in CentOS 7

The DNS service translates hostnames to IP addresses and vice versa. When your computer needs to communicate with google.com, it asks the DNS server for google.com’s IP. The DNS server looks at its own database, or asks upstream DNS servers, and replies to your computer. CentOS uses the ‘bind’ package for running a DNS server. For querying DNS servers we use commands like host, nslookup or dig.

In this article I would like to guide you through setting up a basic DNS server for your own network using CentOS 7 and bind. A few things we need to keep in mind:

  • DNS server IP: 10.0.0.1
  • Domain: example.com
  • Network: 10.0.0.0/24
  • Hostname of the DNS server: core (hence core.example.com)

Configure a static IP for the server, or make sure DHCP always assigns the same IP to this server. We start by installing packages, opening the firewall and enabling the service.

yum install bind bind-utils
firewall-cmd --permanent --add-port=53/udp
firewall-cmd --permanent --add-port=53/tcp
firewall-cmd --reload
systemctl enable named.service
systemctl start named.service
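A quick sanity check at this point, using tools already on the system:

firewall-cmd --list-ports        # should now include 53/tcp and 53/udp
systemctl status named.service   # should show the service as active (running)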

Now we need to configure the bind service to start serving the example.com domain and its DNS entries. First, we create a forward zone and a reverse zone for the domain. Put simply, the forward zone serves name-to-IP lookups (A, CNAME, MX records), while the reverse zone serves IP-to-name lookups (PTR records). We need the following entries in /etc/named.conf:

options {
        listen-on port 53 { 127.0.0.1; 10.0.0.1;};
        listen-on-v6 port 53 { ::1; };
        directory       "/var/named";
        dump-file       "/var/named/data/cache_dump.db";
        statistics-file "/var/named/data/named_stats.txt";
        memstatistics-file "/var/named/data/named_mem_stats.txt";
        allow-query     { localhost; 10.0.0.0/24;};
        recursion yes;
        dnssec-enable yes;
        dnssec-validation yes;
        dnssec-lookaside auto;
        /* Using Google DNS to query for DNS requests outside example.com */
        forwarders {
                8.8.8.8;
                8.8.4.4;
        };
        bindkeys-file "/etc/named.iscdlv.key";
        managed-keys-directory "/var/named/dynamic";
        pid-file "/run/named/named.pid";
        session-keyfile "/run/named/session.key";
};

logging {
        channel default_debug {
                file "data/named.run";
                severity dynamic;
        };
};

zone "." IN {
        type hint;
        file "named.ca";
};

/* Forward Zone */
zone "example.com" IN {
        type master;
        file "example.com.forward";
        allow-update { none; };
};

/* Reverse Zone */
zone "0.0.10.in-addr.arpa" IN {
        type master;
        file "example.com.reverse";
        allow-update { none; };
};

include "/etc/named.rfc1912.zones";
include "/etc/named.root.key";

We have configured our DNS server to forward requests for names outside example.com to external DNS servers (8.8.8.8 and 8.8.4.4). This is called DNS forwarding, and it means client nodes only ever need to be configured with our DNS server. It also improves network performance, because the DNS server caches responses and clients get faster replies.

As you can see in the zone definitions above, we have configured our forward and reverse zones to be served from files named example.com.forward and example.com.reverse. These need to be created inside the /var/named/ folder.

; /var/named/example.com.forward
$TTL 86400
$ORIGIN example.com.
@       IN  SOA     core.example.com. webmaster.example.com. (
        100         ; Serial
        3000        ; Refresh
        3600        ; Retry
        3W          ; Expire
        86400 )     ; Minimum TTL

@       IN  NS      core.example.com.
core    IN  A       10.0.0.1

; /var/named/example.com.reverse
$TTL 86400
$ORIGIN 0.0.10.in-addr.arpa.
@       IN  SOA     core.example.com. webmaster.example.com. (
        2015061001  ; Serial
        3600        ; Refresh
        1800        ; Retry
        604800      ; Expire
        86400 )     ; Minimum TTL

@       IN  NS      core.example.com.
@       IN  PTR     example.com.
1       IN  PTR     core.example.com.
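You can also ask bind to validate each zone file. named-checkzone comes with the bind package and reports the loaded serial and OK when a zone parses cleanly:

named-checkzone example.com /var/named/example.com.forward
named-checkzone 0.0.10.in-addr.arpa /var/named/example.com.reverse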

Now we can restart the service and test to see if it is working fine.

[root@cos ~]# systemctl restart named.service
[root@cos ~]# host core.example.com 10.0.0.1
Using domain server:
Name: 10.0.0.1
Address: 10.0.0.1#53
Aliases: 

core.example.com has address 10.0.0.1
[root@cos ~]# host 10.0.0.1 10.0.0.1
Using domain server:
Name: 10.0.0.1
Address: 10.0.0.1#53
Aliases: 

1.0.0.10.in-addr.arpa domain name pointer core.example.com.
[root@cos ~]# 

Let’s add a few more host entries, so you can add more servers to your network and give them fully qualified domain names. I am also adding the host server.example.com as the one responsible for handling mail for example.com (an MX record). That same host will also serve the www.example.com web content for the domain. Append the following lines to the /var/named/example.com.forward file.

@         IN  MX  10  mail.example.com.
server    IN  A       10.0.0.102
www       IN  CNAME   server
mail      IN  CNAME   server

client    IN  A       10.0.0.103

We should also add the corresponding PTR records to the reverse zone file (/var/named/example.com.reverse).

102     IN  PTR     server.example.com.
103     IN  PTR     client.example.com.

Now, restart the service and test to see if our new hosts are being served.

[root@cos ~]# systemctl restart named.service
[root@cos ~]# host www 10.0.0.1
Using domain server:
Name: 10.0.0.1
Address: 10.0.0.1#53
Aliases: 

www.example.com is an alias for server.example.com.
server.example.com has address 10.0.0.102
[root@cos ~]# host -t MX mail.example.com 10.0.0.1
Using domain server:
Name: 10.0.0.1
Address: 10.0.0.1#53
Aliases: 

mail.example.com is an alias for server.example.com.
[root@cos ~]# host here.com 10.0.0.1
Using domain server:
Name: 10.0.0.1
Address: 10.0.0.1#53
Aliases: 

here.com has address 131.228.152.4
here.com mail is handled by 10 here-com.mail.protection.outlook.com.
[root@cos ~]# 

As you can see, you now have a fully functioning DNS server. All that remains is to configure the machines in the 10.0.0.0/24 network to use 10.0.0.1 as their DNS server. 🙂
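On a CentOS 7 client managed by NetworkManager, that could look like the sketch below; the connection name eth0 is only an example, so adjust it to your own:

nmcli connection modify eth0 ipv4.dns 10.0.0.1 ipv4.dns-search example.com ipv4.ignore-auto-dns yes
nmcli connection up eth0
host core.example.com    # should now resolve to 10.0.0.1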

This is a very nice thing to have if you are learning things like mail servers or virtual hosting for web services. It is perfectly fine inside a lab/testing network, but using it in a production/live scenario is not recommended (it is too basic).

Note: this setting can easily be pushed to all nodes in the network via DHCP if they are all DHCP clients. I will explain how to configure your own DHCP server in another article.

 

Getting Started with Docker


Docker allows you to run tiny containers on your Linux machine. These containers can be compared to virtual machines, but there are some differences that make them lightweight and portable. From a user’s perspective, they are extremely fast to start (and kill) compared to VMs, and also very portable. Here I describe how to install Docker and run containers on an Ubuntu 14.04 (Trusty) machine.

Installation

Installation is fairly easy with the following command. It will ask for your sudo password when needed.

wget -qO- https://get.docker.com/ | sh

It adds an apt repo and installs the latest Docker version on your machine. Verify the installation by running the ‘docker -v’ command; it will show the installed Docker version.

Pull and Run

Now we need to pull a basic image and run it to see the magic.

$docker pull ubuntu:14.04
14.04: Pulling from ubuntu
e9e06b06e14c: Extracting [=======================> ] 62.95 MB/65.77 MB
a82efea989f9: Download complete 
37bea4ee0c81: Download complete 
......
$docker run -ti ubuntu:14.04 /bin/bash
root@c419ad172b0f:/#

What has happened here is that you have downloaded the Ubuntu 14.04 image from the Docker registry and run it. The ‘root@c419ad172b0f:/#’ prompt you see is the container running a bash shell. You can now work in it as you would on any normal Ubuntu machine; when you exit the shell, the container stops as well. Enjoy!

Maintenance

Exiting a container does not actually delete it; it is left on the filesystem so that you can restart it if needed. Let’s take a look at how to list the images we pulled and the containers we started, and how to work with them.

$docker images
REPOSITORY                                           TAG                 IMAGE ID            CREATED             VIRTUAL SIZE
ubuntu                                               14.04               07f8e8c5e660        2 weeks ago         188.3 MB

As you can see, it lists all the images you have downloaded. To remove an image, run ‘docker rmi ubuntu:14.04’.

$docker ps -a
CONTAINER ID IMAGE COMMAND CREATED STATUS PORTS NAMES
c419ad172b0f ubuntu:14.04 "/bin/bash" 24 minutes ago Exited (0) 2 minutes ago hopeful_thing

This is the container that was stopped in the previous step. Notice that the CONTAINER ID c419ad172b0f is the same as the hostname we saw in the prompt when we ran the container. Each time you run a new container, Docker creates a unique ID for it. You can start a stopped container and attach a shell to it using the commands below.

$docker start c419ad172b0f
c419ad172b0f
$docker attach c419ad172b0f
root@c419ad172b0f:/# echo Hello
Hello
root@c419ad172b0f:/# exit
exit
$

Now let’s take a look at how to remove a container.

$docker rm c419ad172b0f
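If you experiment a lot, stopped containers pile up quickly. A small sketch for cleaning them all up in one go (check the list first, because this removes every exited container):

docker ps -aq -f status=exited          # list the IDs of all stopped containers
docker rm $(docker ps -aq -f status=exited)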

Customized Images

You can also create customized images for your own use. For example, I want an image that includes the ‘screen’ package on top of the basic Ubuntu image. It is easy to prepare a ‘Dockerfile’ and let Docker build a new image for you. Create a file named ‘Dockerfile’ inside an empty folder with the content given below.

FROM ubuntu:14.04
MAINTAINER ksraju007@gmail.com

ENV DEBIAN_FRONTEND=noninteractive
RUN apt-get update ; apt-get -y install screen ;
CMD "/bin/bash"

Now build the image. Notice that we are giving it a new name called u14screen using the -t option.

docker build -t u14screen --rm .
Sending build context to Docker daemon 2.048 kB
Sending build context to Docker daemon
Step 0 : FROM ubuntu:14.04 
 ---> 07f8e8c5e660
Step 1 : MAINTAINER ksraju007@gmail.com
 ---> Using cache
 ---> 52fe481915f0
Step 2 : ENV DEBIAN_FRONTEND noninteractive
 ---> Running in 7d802e182f91
 ---> ffd0cd910a16
Removing intermediate container 7d802e182f91
Step 3 : RUN apt-get update ; apt-get -y install screen ;
 ---> Running in 6574afc3a441
....
Processing triggers for ureadahead (0.100.0-16) ...
 ---> 76433af5559b
Removing intermediate container 6574afc3a441
Step 4 : CMD "/bin/bash"
 ---> Running in 8933a2b86e12
 ---> ce468a62539a
Removing intermediate container 8933a2b86e12
Successfully built ce468a62539a
$ docker images
REPOSITORY TAG IMAGE ID CREATED VIRTUAL SIZE
u14screen latest ce468a62539a About a minute ago 210.9 MB

Hurray! You have just built your own Docker image. Try running it.

$docker run -ti u14screen
root@95d2c547e113:/# screen -v
Screen version 4.01.00devel (GNU) 2-May-06
root@95d2c547e113:/# exit
exit
$

Since we set the CMD instruction in the Dockerfile to start /bin/bash automatically, you no longer need to specify it during ‘docker run’ for your image. For more information about advanced Docker options and Dockerfile instructions, see https://docs.docker.com/

The best thing about having a Dockerfile is that you can send it to your friends and they can build exactly the same image on their computers. This makes it extremely easy to replicate work environments across people, and since a Dockerfile is a simple text file, you are not copying lots of data around either.

Enjoy your docker experiments !  🙂

Disable Key Confirmation for SSH

When you connect to a machine over SSH for the first time, you will be greeted with a prompt like this:

The authenticity of host 'localhost (::1)' can't be established.
RSA key fingerprint is 73:8c:9a:44:c1:5a:e1:9d:20:f1:12:2a:42:da:0f:6f.
Are you sure you want to continue connecting (yes/no)?

This is a security feature of the SSH protocol: it makes sure you verify the identity of the machine before you actually connect to it. Subsequent connections to the host will not ask for this confirmation as long as the identity matches.

But it is also an inconvenience to type “yes”, especially when you are working with many new nodes (clones in a cloud platform, for example). There are a few things you can do to bypass this, which is especially useful when you are working with scripts.

Disable it for only one time

When you want to disable the prompt temporarily for just one command, use the command line option as below.

ssh -o StrictHostKeyChecking=no target.host.ip

Make it permanent

If you would like to make this change permanent, you should also consider bypassing the known_hosts file, since ssh will otherwise keep complaining about mismatches for hosts that already have entries there. Add these entries to the `~/.ssh/config` file:

Host *
  StrictHostKeyChecking no
  UserKnownHostsFile /dev/null

Notes

These settings save you the trouble of typing “yes” and stop host key changes from breaking your scripts. But remember, this makes you vulnerable to man-in-the-middle attacks, so use it only in internal, trusted environments.

Defining an alias for such a command is handy, so you can use it when needed. For bash, add this line to `~/.bashrc`:

alias ssho='ssh -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null'

After that, you can use the `ssho` command to connect to such hosts or inside your scripts.
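For example (the user and host addresses are just placeholders). Note that bash does not expand aliases inside scripts by default, so within a script it is safer to spell the options out:

# interactively
ssho centos@10.0.0.11

# in a script
for ip in 10.0.0.11 10.0.0.12 10.0.0.13; do
    ssh -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null root@"$ip" uptime
done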

Enjoy ! 🙂