Ansible on IOS to loop through subset of interfaces

I am running WSL on Windows 10 with Ubuntu and Ansible 2.9.9, and I am new to Ansible. I execute a show command on a Cisco device to find the interfaces on which a given network protocol runs. I then want to extract those interfaces and execute commands on them, in this case to turn the protocol off. Ideally the show command could be changed easily, since for many protocols this is the consistent way I would check the state. There may also be ways Ansible already stores this protocol information, maybe with facts? I found examples using ios_config at https://docs.ansible.com/ansible/latest/modules/ios_config_module.html, but the interfaces are hard-coded, as shown in this helper example:

- name: configure ip helpers on multiple interfaces
  ios_config:
    lines:
      - ip helper-address 172.26.1.10
      - ip helper-address 172.26.3.8
    parents: "{{ item }}"
  with_items:
    - interface Ethernet1
    - interface Ethernet2
    - interface GigabitEthernet1

My attempt is as follows, which gives me the two interfaces with multicast active. But what do I do next to act on those interfaces in a loop?

  tasks:
  - name: Gather interfaces running PIM
    ios_command:
      commands:
        - show ip pim interface
    register: pim

  - name: Write PIM interface data to file
    copy:
      content: "{{pim.stdout_lines[0]}}"
      dest: "backups/{{ansible_alias}}-pim-interfaces.txt"


  - name: Glean PIM INTF's
    shell: cat backups/{{ ansible_alias }}-pim-interfaces.txt | tr ' ' '\n' | grep 'GigabitEthernet'
    register: pim

  - debug: msg='{{ pim.stdout_lines }}'


TASK [debug] ******************************************************************************************************************************************************************************************************
ok: [10.239.121.2] => {
    "msg": [
        "GigabitEthernet0/0/0",
        "GigabitEthernet0/0/1.125"
    ]
}
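
To make the goal concrete, what I imagine the next step looking like is something along these lines (an untested sketch; the no ip pim sparse-mode line is an assumption about how PIM was enabled on these interfaces):

  - name: Disable PIM on the gleaned interfaces
    ios_config:
      lines:
        - no ip pim sparse-mode   # assumption: PIM was enabled as sparse-mode
      parents: "interface {{ item }}"
    loop: "{{ pim.stdout_lines }}"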

Many thanks for any guidance.

Go to Source
Author: IainT

AWX cannot find matching host

I have an on-premise AWX and an inventory of instances on AWS. To provision newly launched instances I want to use provisioning callbacks.
But when I execute this command from the instance

$ curl -v -H 'Content-Type:application/json' -d '{"host_config_key": "key1"}' https://my-host.com/api/v2/job_templates/28/callback/

an error occurs

{"msg":"No matching host could be found!"}

What might be wrong with the AWX configuration if the host is present in the dynamic inventory and the UPDATE ON LAUNCH flag is enabled?
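
If it helps with debugging, my understanding from the Tower/AWX docs (not fully verified on my version) is that an authenticated GET on the same callback URL should return the list of hosts AWX considers a match (a matching_hosts field). Here admin:password is just a placeholder for valid AWX credentials:

$ curl -s --user admin:password -H 'Content-Type: application/json' https://my-host.com/api/v2/job_templates/28/callback/

If the instance's address is not in that list, the POST from the instance would fail with the same "No matching host could be found!" error.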

Go to Source
Author: Most Wanted

Are there serious problems with an EC2 auto-scaling AMI that automatically downloads newest server from git?

I'm converting some servers from manual scaling (with a load balancer) to auto-scaling. I'm currently using an Ansible script to manage upgrades, and I don't want an automatically created instance to run anything but the newest version of the software. So I created an AMI that downloads the newest commit from a chosen branch on first boot.
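
For reference, the first-boot logic baked into the AMI looks roughly like this (a simplified sketch; the repository path, branch, and service name are placeholders rather than my real setup):

    #cloud-config
    runcmd:
      # assumes the AMI already contains a clone of the repo at /opt/myapp
      - git -C /opt/myapp pull origin release
      - systemctl restart myapp   # placeholder service name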

I’ve since realized my approach is somewhat like “cowboy coding”, since AWS provides a mechanism to replace auto-scaling instances when there should be an update. (If using that, my existing update script would become obsolete, and I guess the update would entail creating a new AMI containing the new server version.)

Are there any serious problems with using the “cowboy” approach? I realize auto-created servers might end up with newer code than the other servers (if someone pushes but does not deploy their code changes). I wonder whether auto-scaling will cause Ansible scripts to fail if servers are terminated while being processed by Ansible. What have I missed?

Go to Source
Author: piojo

HAProxy container created with an Ansible playbook on a remote machine is not responding, although the container is running

I have already created an Ansible playbook for one HAProxy container and two Apache containers. I also mapped ports and volumes so that HAProxy distributes client requests to the Apache servers, but the HAProxy server is not responding, even though the containers were created and the mappings are in place. Here is my playbook:

    - hosts: all
      vars:
        img:
          - httpd:latest
          - haproxy:latest
      tasks:
        - name: pull image
          docker_image:
            name: "{{ item }}"
            source: pull
          loop: "{{ img }}"

        - name: copy my contain1
          copy:
            src: ./index1.html
            dest: /home/roy/volume/index.html
          notify: apache1

        - name: copy my contain2
          copy:
            src: ./index2.html
            dest: /home/roy/volume1/index.html
          notify: apache2

        - name: create containers
          docker_container:
            name: haproxycon1
            image: haproxy:latest
            volumes:
              - /home/roy/ha/:/usr/local/etc/haproxy/
            state: stopped
            ports:
              - "80:80"
            command: sleep 1d
          notify: pop
          register: haproxycon1
          ignore_errors: yes

        - name: create apache1 container
          docker_container:
            name: con1
            image: httpd:latest
            state: started
            volumes:
              - /home/roy/volume:/usr/local/apache2/htdocs
            exposed_ports:
              - 80
          register: con1
          ignore_errors: yes

        - name: create apache2
          docker_container:
            name: con2
            image: httpd:latest
            state: started
            volumes:
              - /home/roy/volume1:/usr/local/apache2/htdocs
            exposed_ports:
              - 80
          register: con2
          ignore_errors: yes

        - name: haproxyconf
          template:
            src: ./haproxy.cfg
            dest: /home/roy/ha/haproxy.cfg
          notify: pop

      handlers:
        - name: pop
          docker_container:
            name: haproxycon1
            state: started

        - name: apache1
          docker_container:
            name: con1
            state: started

        - name: apache2
          docker_container:
            name: con2
            state: started

Go to Source
Author: KKSpro

How to automate the key exchange in WireGuard when you deploy a cluster of machines?

Let's assume you want to deploy a cluster of machines on Hetzner Cloud. For simplicity, let's call them worker1, worker2, worker3. They need to communicate with a server called master, which will be running on a different account than the workers. Ideally, the whole setup should not be open to the internet. Unfortunately, Hetzner supports private networks only within the same account.

To make it work, you can set up your own VPN using WireGuard. Conceptually, it is not hard: you need to set up three connections (between the master and each worker). The tricky part is how to automate the key exchange. Ideally, it should not become more work if you deploy additional workers (e.g. 100 instead of 3).

Setting up such a VPN cluster sounds like a common problem, but I cannot find any recommendations on how to set up 1-to-n or n-to-m connections, only tutorials on how to peer two machines. I'm thinking of automating the key exchange with Ansible (generate the keys, gather them, install them on the master; a rough sketch of what I have in mind follows the clarification below), but I wanted to check first whether there is an easier solution that I have missed.

In SSH, workers could share their key, which would simplify the problem. In WireGuard, keys cannot be shared, as far as I understand. How would you automate the setup of a VPN with WireGuard, so each worker can reach the master? Or is WireGuard the wrong choice for the problem?

Clarification:

  • In my scenario, it is not possible to move the workers and master to the same account; otherwise, Hetzner networks would be the straightforward solution for setting up a private network.
  • If you are not familiar with Hetzner Cloud, it is not a problem. You can assume that you get normal Linux machines, but then you are on your own (it does not support VPC peering across accounts as AWS does). Yet you can use all Linux tools available for creating the VPN setup. WireGuard would be my first choice, but I’m open to other techniques.
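
For reference, this is roughly the Ansible approach I have in mind (an untested sketch; the workers/master group names, the wg0.conf.j2 template, and the master_private_key and wg_tunnel_ip variables are placeholders of my own, not an established recipe):

    - hosts: workers
      become: yes
      tasks:
        - name: Generate a WireGuard key pair on each worker
          shell: umask 077 && wg genkey | tee /etc/wireguard/privatekey | wg pubkey > /etc/wireguard/publickey
          args:
            creates: /etc/wireguard/privatekey

        - name: Read each worker's public key back to the controller
          slurp:
            src: /etc/wireguard/publickey
          register: wg_pubkey

    - hosts: master
      become: yes
      tasks:
        - name: Render the master config with one [Peer] block per worker
          template:
            src: wg0.conf.j2
            dest: /etc/wireguard/wg0.conf
            mode: "0600"

The wg0.conf.j2 template would then pull the gathered public keys out of hostvars:

    [Interface]
    PrivateKey = {{ master_private_key }}
    ListenPort = 51820

    {% for host in groups['workers'] %}
    [Peer]
    PublicKey = {{ hostvars[host].wg_pubkey.content | b64decode | trim }}
    AllowedIPs = {{ hostvars[host].wg_tunnel_ip }}/32
    {% endfor %}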

Go to Source
Author: Philipp Claßen

Improving deployment process of agent software

I’m a developer on an agent software team, researching ways to streamline the mass deployment of our agent on clients’ machines. Currently the process involves running a script to determine the exact package needed, downloading a bsx from our company’s FTP and running it, followed by running an executable with flags for registration.

I've looked into Ansible as a possible improvement; however, I'm unclear on the value it provides in such a case. What's more, I'm not sure I'm the right person to be authoring the clients' playbooks for such a thing. At this point it would appear to me that developing our own script to further automate the process would be a better solution, letting each customer's DevOps team deploy it in whichever way they see fit.

Before I present these findings to my managers, I would appreciate any opinions on the matter the community could provide.

Go to Source
Author: Ekadanta

Key Weirdness with Dict Return Result from Lookup and Dig

Delving deeper into Ansible has been fun (I'll be honest, some parts of it feel tortured as you try to get more programmatic about it, and I know that's my issue! But it's really nice to get up and going).

For example, while I am sure there is an easier way to do the task below (I am waiting on a DNS update before proceeding; feel free to suggest one!), I was really surprised that I had to use the lookup expression itself as the key here.

My question really is: is there a cleaner way to use the lookup and verify the IP in the result? (With a secondary hope that there is a better way than raw output from nslookup or the like; there has to be a pattern I haven't found.)

Thanks

    - name: Wait for Google DNS to Update
      debug:
        var: lookup('dig', '{{ fqdn }}', '@8.8.8.8')
      register: test_var
      until: new_ip == test_var["lookup('dig', '{{ fqdn }}', '@8.8.8.8')"]
#       new_ip in test_var doesn't work
#      retries: 12
#      delay: 5
    - name: and test_var is?
      debug:
        msg: "{{ test_var }}"

Here is the output:

ok: [localhost] => {
    "msg": {
        "changed": false,
        "failed": false,
        "lookup('dig', 'test.com', '@8.8.8.8')": "192.138.219.231")
    }
}

And I thought that maybe query would get me the desired result. Though it is easier to work with for multiple IPs, it still has the same key ugliness…

ok: [localhost] => {
    "msg": {
        "changed": false,
        "failed": false,
        "query ('dig', 'yahoo.com', '@8.8.8.8')": [
            "98.137.246.7",
            "98.138.219.231",
            "98.137.246.8",
            "72.30.35.10",
            "98.138.219.232",
            "72.30.35.9"
        ]
    }
}
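
For what it's worth, the cleanest shape I have come up with so far (an untested sketch) renders the lookup into msg and registers that, so the value lands under a plain msg key instead of under the lookup expression:

    - name: Wait for Google DNS to Update
      debug:
        msg: "{{ lookup('dig', fqdn, '@8.8.8.8') }}"
      register: dig_result
      until: dig_result.msg == new_ip
      retries: 12
      delay: 5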

Go to Source
Author: IGotAHeadache