| Author | SHA1 | Message | Date |
|---|---|---|---|
| | bc1aa07b43 | Update readme, license and some minors changes | 3 years ago |
| | 7b05aabdde | Add pattern to ignore | 3 years ago |
| | 5f6a7db5b1 | fix inventory.py script | 3 years ago |
@@ -1,18 +1,74 @@
# These are some examples of commonly ignored file patterns.
# You should customize this list as applicable to your project.
# Learn more about .gitignore:
# https://www.atlassian.com/git/tutorials/saving-changes/gitignore

# ---> Ansible
*.retry

# ---> Linux
# Linux home files
*~
# Linux trash folder which might appear on any partition or disk
.Trash-*
# Temporary files which can be created if a process still has a handle open on a deleted file
.fuse_hidden*
# .nfs files are created when an open file is removed but is still being accessed
.nfs*
# KDE directory preferences
.directory
# Swap, temporary files
*.swp

# Credentials files
*.creds
# Node artifact files
node_modules/
dist/
# Compiled Java class files
*.class
# Compiled Python bytecode
*.py[cod]
# Log files
*.log
# Package files
*.jar
# Maven
target/
# JetBrains IDE
.idea/
# Unit test reports
TEST*.xml
# Generated by macOS
.DS_Store
# Generated by Windows
Thumbs.db
# Applications
*.app
*.exe
*.war
# Large media files
*.mp4
*.tiff
*.avi
*.flv
*.mov
*.wmv
@@ -1,3 +1,94 @@
# ansible.infra.services

[](https://opensource.org/licenses/Apache-2.0)

Repos with recipes to deploy some infrastructure services.

[**Why?**](#why) |
[**Code Layout**](#code-layout) |
[**Usage**](#usage) |
[**Contribution Guide**](#contribution-guide)

## Why?
To speed up the adoption of infrastructure deployment through code, it is important to provide a catalog that makes it quick to create or delete the resources most commonly used in an organization.

The benefits of infrastructure as code:
- traceability of changes
- repeatability, faster deployments
- standardization of resources and deployments

---
## Code Layout
```
├── ansible.cfg
├── Dockerfile
├── files
│   ├── check_jinja_syntax.py
│   └── Readme.md
├── infra.yml
├── inventory
│   ├── azure_rm.yml
│   ├── group_vars/
│   ├── host_vars/
│   └── inventory.py
├── LICENSE
├── playbook_crowdstrike.yml
├── playbook_dynatrace.yml
├── ...
├── playbook_ssh_known_host.yml
├── playbook_...yml
├── README.md
├── requirements.txt
├── roles
│   ├── iptables
│   ├── known_hosts
│   └── ...
├── run.sh
```
All repository content is meant to be public, with no secrets and no team-specific configuration.

At the root are the **playbooks** that are called to create resources.
These playbooks can import other playbooks or call roles from the **roles** directory.

The **inventory** directory configures which cloud(s) to interact with - Azure, AWS, GCP.

The **vars** directory contains the definitions of the resources to manage. It should be used only when the repository belongs to a single team.
In the case of a shared repository - which is the preferred setup - the variable files (defining the resources to manage) must live in a separate repository, restricted to each team.

---
## Usage
Build the Docker image to get a uniform environment across runs, with all the required libraries.
```bash
docker build --rm --compress -t <image-name> .
```
The **Dockerfile** is at the root of the repository.

Run inside the container:
```bash
docker run -v </path/to/vars/on/the/host>:/opt/ansible/vars -ti --rm --env-file <credentials/file> <image-name>
ansible-playbook -e @vars/<var-file.yml> playbook_adds.yml
```
The credentials file sets, in the container's environment variables, the connection details for the cloud.

For example, for Azure:
```
AZURE_CLIENT_ID=xxxxxxxx-xxxx-xxxx-xxxx-xxxxxxxxxxxx
AZURE_SECRET=xxxxx~xxxxxxxxxxxxxxxxxxxxxxxxxxxxxxx
AZURE_SUBSCRIPTION_ID=xxxxxxxx-xxxx-xxxx-xxxx-xxxxxxxxxxxx
AZURE_TENANT=xxxxxxxx-xxxx-xxxx-xxxx-xxxxxxxxxxxx
```
---
## Contribution Guide
1. Clone the repository and create your working branch.
2. Make your changes and test their impact on the existing code.
3. Submit a pull request and merge your changes once they are approved.
@@ -1,13 +1,22 @@
plugin: azure_rm
auth_source: auto
location: canadaeast,canadacentral,eastus
cloud_environment: "AzureCloud"
default_host_filters:
  - 'powerstate != "running"'
hostvar_expressions:
  ansible_host: (public_ipv4_addresses + private_ipv4_addresses) | first
  provider: "'azure'"
keyed_groups:
  - prefix: azure
    key: tags.none | default('ec2')
plain_host_names: yes

plugin: azure_rm
auth_source: auto
location: canadaeast,canadacentral,eastus
cloud_environment: "AzureCloud"
default_host_filters:
  - 'powerstate != "running"'
hostvar_expressions:
  ansible_host: (public_ipv4_addresses + private_ipv4_addresses) | first
  private_ipv4_address: private_ipv4_addresses | first
  public_ipv4_address: (public_ipv4_addresses + private_ipv4_addresses) | first
  subscription_id: id.split("/")[2]
  provider: "'azure'"
conditional_groups:
  linux: "'linux' in os_profile.system"
  windows: "'windows' in os_profile.system"
keyed_groups:
  - key: tags.none | default('azure')
    separator: ''
  - key: tags.fct | default('azure')
    separator: ''
    prefix: azure
plain_host_names: yes
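As a rough sketch (not the plugin's actual implementation), a `keyed_groups` entry composes the group name from the prefix, the separator, and the key's resolved value, so with `prefix: azure`, `separator: ''`, and a tag value of `web`, a host lands in group `azureweb`. Function and values below are hypothetical, for illustration only:

```python
# Illustrative only: mimics how an inventory plugin's keyed_groups entry
# builds a group name from prefix + separator + key value.
def keyed_group_name(prefix, separator, value):
    return "{}{}{}".format(prefix, separator, value)

print(keyed_group_name("azure", "", "web"))   # azureweb
print(keyed_group_name("azure", "_", "fct"))  # azure_fct
```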
@@ -1,4 +1,9 @@
ansible_python_interpreter: "/usr/bin/python3"
ansible_connection: local
...

---
children:
  ungrouped:
    hosts:
      localhost:
        ansible_user: master
        ansible_python_interpreter: "/usr/local/bin/python3"
        ansible_connection: local
...
@@ -1,177 +1,177 @@
#!/usr/bin/env python3
import sys
import os
import yaml
import json


class YamlReaderError(Exception):
    pass


# **********************************
def static_to_dynamic_inventory(inputdict, hosts=None, groups=None, position='top'):
    '''{
        "_meta": {
            "hostvars": {}
        },
        "all": {
            "children": [
                "ungrouped"
            ]
        },
        "ungrouped": {
            "children": [
            ]
        }
    }
    '''
    # Avoid mutable default arguments, which would leak state between calls.
    if hosts is None:
        hosts = {}
    if groups is None:
        groups = {}
    outputdict = {'_meta': {'hostvars': {}}}
    newhosts = {}
    newgroups = {}
    for k, v in inputdict.items():
        if k == 'groups' or k == 'children':
            for group in v:
                if group not in groups:
                    groups.update({group: {}})
        if isinstance(v, dict):
            if 'children' in v:
                if k not in newgroups:
                    newgroups = {k: {'children': []}}
                for group in v['children']:
                    newgroups[k]['children'].append(group)
                groups.update(newgroups)
            if 'groups' in v:
                if k not in newgroups:
                    newgroups = {k: {'children': []}}
                for group in v['groups']:
                    newgroups[k]['children'].append(group)
                groups.update(newgroups)
            if 'hosts' in v:
                if isinstance(v['hosts'], list):
                    msg = """
                    Hosts should not be defined as a list:
                    Error appears in v['hosts']
                    Do this:
                      hosts:
                        host1:
                        host2:
                    Instead of this:
                      hosts:
                        - host1
                        - host2
                    Exiting on error (1)
                    """
                    sys.stderr.write(msg)
                    sys.exit(1)
                for host in list(v['hosts']):
                    if k in groups:
                        if 'hosts' in groups[k]:
                            groups[k]['hosts'].append(host)
                        else:
                            groups[k]['hosts'] = [host]
                    else:
                        groups.update({k: {'hosts': [host]}})
                    if v['hosts'][host] is None:
                        if host not in newhosts:
                            newhosts[host] = {}
                    elif 'vars' in v['hosts'][host]:
                        newhosts.update({host: v['hosts'][host]})
                    else:
                        for key, val in v['hosts'][host].items():
                            if host in newhosts:
                                newhosts[host].update({key: val})
                            else:
                                newhosts[host] = {key: val}
                hosts.update(newhosts)
            if 'vars' in v:
                if position == 'group':
                    if k in newgroups:
                        newgroups[k].update({'vars': v['vars']})
                    else:
                        newgroups[k] = {'vars': v['vars']}
                    groups.update(newgroups)
            if k == 'groups' or k == 'children':
                newposition = 'group'
            elif k == 'hosts':
                newposition = 'host'
            else:
                newposition = 'data'
            valid_group_syntax = ['children', 'groups', 'hosts', 'vars', '', None]
            if position == 'group':
                for word in v:
                    if word not in valid_group_syntax:
                        print("Syntax error in definition of group: {}".format(k))
                        print("\"{}\" is not a valid syntax key in group".format(word))
                        sys.exit(1)
            outputdict.update(static_to_dynamic_inventory(v, hosts, groups, newposition))
    outputdict['_meta']['hostvars'].update(hosts)
    outputdict.update(groups)
    return outputdict


# **********************************
def data_merge(inst1, inst2):
    try:
        if (inst1 is None or isinstance(inst1, str)
                or isinstance(inst1, int)
                or isinstance(inst1, float)):
            inst1 = inst2
        elif isinstance(inst1, list):
            if isinstance(inst2, list):
                inst1 = inst1 + inst2
            else:
                inst1.append(inst2)
        elif isinstance(inst1, dict):
            if isinstance(inst2, dict):
                inst1.update(inst2)
            else:
                raise YamlReaderError('Cannot merge non-dict "%s" into dict "%s"' % (inst2, inst1))
    except TypeError as e:
        raise YamlReaderError('TypeError "%s" when merging "%s" into "%s"' %
                              (e, inst1, inst2))
    return inst1


# **********************************
def load_static_inventory(path, static):
    # Load the static inventory: collect every *.yml / *.yaml file under path.
    files = {}
    files['static'] = {}
    files['static']['dir'] = path + '/static'
    files['static']['files'] = []
    static_hosts = []
    for root, directory, filename in sorted(os.walk(path)):
        for file in filename:
            if file.endswith(('.yml', '.yaml')):
                files['static']['files'].append(os.path.join(root, file))
                # Use a context manager so the file handle is always closed.
                with open(os.path.join(root, file), "rb") as fh:
                    filecontent = yaml.load(fh.read(), Loader=yaml.FullLoader)
                if isinstance(filecontent, dict):
                    filecontent = static_to_dynamic_inventory(filecontent)
                    if 'hostvars' in filecontent['_meta']:
                        for hostname in filecontent['_meta']['hostvars']:
                            static_hosts.append(hostname)
                    static.update(filecontent)
    static_hosts = sorted(set(static_hosts))
    return static, static_hosts


# **********************************
def main():
    static = {'_meta': {'hostvars': {}}}
    static, static_hosts = load_static_inventory(os.path.dirname(__file__), static)
    print(json.dumps(static, indent=2))
    # print(json.dumps(static_hosts, indent=2))


if __name__ == '__main__':
    main()
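As a minimal illustration of the transformation this script performs (a standalone sketch, not a call into inventory.py), a static-style group mapping becomes the dynamic-inventory JSON shape with a `_meta.hostvars` section. The group and host names below are hypothetical:

```python
import json

# Hypothetical static inventory fragment: hosts as a mapping, per the
# script's rules (never as a list).
static = {"web": {"hosts": {"web1": None, "web2": {"ansible_port": 2222}}}}

# Build the dynamic-inventory shape: per-group host lists plus _meta.hostvars.
dynamic = {"_meta": {"hostvars": {}}}
for group, data in static.items():
    dynamic[group] = {"hosts": sorted(data.get("hosts", {}))}
    for host, hostvars in data.get("hosts", {}).items():
        dynamic["_meta"]["hostvars"][host] = hostvars or {}

print(json.dumps(dynamic, indent=2))
```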
@@ -1,10 +1,10 @@
---
- name: Update ssh known host
  hosts:
    - all,!localhost
  tags:
    - "ssh"
  gather_facts: no
  roles:
    - {role: known_hosts, tags: ["ssh"]}
...
@@ -1,2 +1,2 @@
# iptables
A role to update iptables rules and save them
@@ -1,4 +1,4 @@
---
iptables_config_file: "/etc/sysconfig/iptables"
iptables_rules: []
...
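By default `iptables_rules` is empty; callers override it with a list of rule dictionaries whose keys match the `iptables` module parameters the role passes through. A hypothetical example (chains, ports, and targets are illustrative only):

```yaml
iptables_rules:
  # Allow inbound SSH (illustrative rule, adjust to your environment)
  - chain: INPUT
    protocol: tcp
    destination_port: "22"
    jump: ACCEPT
  # Default-deny everything else on INPUT
  - chain: INPUT
    policy: DROP
```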
@@ -1,79 +1,79 @@
---
- name: Ensure iptables is present
  apt:
    name: 'iptables'
    update_cache: true
    state: present
  when: ansible_facts.os_family == "Debian"

- name: Ensure iptables is present
  yum:
    name: 'iptables'
    update_cache: true
    state: present
  when: ansible_facts.os_family == "RedHat"

- name: Save current iptables config if it exists
  copy:
    dest: "{{ iptables_config_file }}.fallback"
    src: "{{ iptables_config_file }}"
    remote_src: yes
  failed_when: false

- name: Apply rules
  iptables:
    ip_version: "{{ item.ip_version | default('ipv4', true) }}"
    action: "{{ item.action | default(omit, true) }}"
    rule_num: "{{ item.rule_num | default(omit, true) }}"
    chain: "{{ item.chain | default('INPUT', true) }}"
    flush: "{{ item.flush | default(omit, true) }}"
    policy: "{{ item.policy | default(omit, true) }}"
    table: "{{ item.table | default('filter', true) }}"
    source: "{{ item.source | default(omit, true) }}"
    destination: "{{ item.destination | default(omit, true) }}"
    src_range: "{{ item.src_range | default(omit, true) }}"
    dst_range: "{{ item.dst_range | default(omit, true) }}"
    source_port: "{{ item.source_port | default(omit, true) }}"
    destination_port: "{{ item.destination_port | default(omit, true) }}"
    protocol: "{{ item.protocol | default(omit, true) }}"
    icmp_type: "{{ item.icmp_type | default(omit, true) }}"
    in_interface: "{{ item.in_interface | default(omit, true) }}"
    out_interface: "{{ item.out_interface | default(omit, true) }}"
    goto: "{{ item.goto | default(omit, true) }}"
    jump: "{{ item.jump | default(omit, true) }}"
    cstate: "{{ item.cstate | default(omit, true) }}"
    fragment: "{{ item.fragment | default(omit, true) }}"
    gateway: "{{ item.gateway | default(omit, true) }}"
    gid_owner: "{{ item.gid_owner | default(omit, true) }}"
    uid_owner: "{{ item.uid_owner | default(omit, true) }}"
    limit: "{{ item.limit | default(omit, true) }}"
    limit_burst: "{{ item.limit_burst | default(omit, true) }}"
    log_level: "{{ item.log_level | default(omit, true) }}"
    log_prefix: "{{ item.log_prefix | default(omit, true) }}"
    match: "{{ item.match | default(omit, true) }}"
    reject_with: "{{ item.reject_with | default(omit, true) }}"
    set_counters: "{{ item.set_counters | default(omit, true) }}"
    set_dscp_mark: "{{ item.set_dscp_mark | default(omit, true) }}"
    set_dscp_mark_class: "{{ item.set_dscp_mark_class | default(omit, true) }}"
    syn: "{{ item.syn | default('ignore', true) }}"
    tcp_flags: "{{ item.tcp_flags | default(omit, true) }}"
    to_source: "{{ item.to_source | default(omit, true) }}"
    to_destination: "{{ item.to_destination | default(omit, true) }}"
    to_ports: "{{ item.to_ports | default(omit, true) }}"
    state: "{{ item.state | default('present', true) }}"
  with_items: "{{ iptables_rules }}"

- name: Ensure iptables service is running
  service:
    name: iptables
    state: started
    enabled: yes

- name: Save current iptables rules
  shell: "iptables-save > {{ iptables_config_file }}"

- name: Reload saved iptables rules
  service:
    name: iptables
    state: reloaded
...
@@ -1,3 +1,3 @@
# known_hosts
A role to update ssh_known_hosts
This is mostly useful for the ansible control node
@@ -1,3 +1,3 @@
---
clean_known_hosts: True
...
@@ -1,37 +1,37 @@
---
- name: Ensure ssh dir exists
  file:
    path: "~/.ssh"
    state: directory
    mode: 0750
  delegate_to: localhost

- name: Ensure known_hosts file exists
  copy:
    content: ""
    dest: "~/.ssh/known_hosts"
    force: no
    mode: 0640
  delegate_to: localhost

- name: Remove ip
  shell: "ssh-keygen -R {{ public_ipv4_address }}"
  failed_when: false
  changed_when: false
  when:
    - clean_known_hosts | bool
  delegate_to: localhost

- name: Search ip
  shell: "ssh-keygen -F {{ public_ipv4_address }}"
  failed_when: false
  changed_when: false
  register: searchip
  delegate_to: localhost

- name: Insert
  shell: "ssh-keyscan {{ public_ipv4_address }} >> ~/.ssh/known_hosts"
  when:
    - searchip.rc != 0
  delegate_to: localhost
...
@@ -1,58 +1,58 @@
#!/usr/bin/env bash
# ENV Vars:
# VAGRANT_MODE          - [0,1]
#                       - to be used with bovine-inventory's vagrant mode
# ANSIBLE_RUN_MODE      - ["playbook","ad-hoc"]
#                       - specify which mode to run ansible in
# ANSIBLE_PLAYBOOK_FILE - defaults to "infra.yml"
#                       - specify playbook to pass to ansible-playbook
#                       - NB: only used when run mode is "playbook"
# ANSIBLE_BASE_ARA      - ["0","1"]
#                       - a bash STRING (not numeral) to enable ARA
# VAULT_PASSWORD_FILE   -
export ANSIBLE_RUN_MODE="${ANSIBLE_RUN_MODE:-playbook}"
export ANSIBLE_PLAYBOOK_FILE="${ANSIBLE_PLAYBOOK_FILE:-infra.yml}"
export VAULT_PASSWORD_FILE="${VAULT_PASSWORD_FILE:-${HOME}/.ssh/creds/vault_password.txt}"
export VAGRANT_MODE="${VAGRANT_MODE:-0}"

run_ansible() {
  INOPTS=( "$@" )
  VAULTOPTS=""
  # Plaintext vault decryption key, not checked into SCM
  if [ -f "${VAULT_PASSWORD_FILE}" ]; then
    VAULTOPTS="--vault-password-file=${VAULT_PASSWORD_FILE}"
    if [ "${ANSIBLE_RUN_MODE}" == 'playbook' ]; then
      time ansible-playbook --diff ${VAULTOPTS} "${ANSIBLE_PLAYBOOK_FILE}" "${INOPTS[@]}"
      return $?
    elif [ "${ANSIBLE_RUN_MODE}" == 'ad-hoc' ]; then
      time ansible --diff ${VAULTOPTS} "${INOPTS[@]}"
      return $?
    fi
  else
    if [ "${ANSIBLE_RUN_MODE}" == 'playbook' ]; then
      echo "Vault password file unreachable. Skipping steps that require vault."
      VAULTOPTS="--skip-tags=requires_vault"
      #echo "ansible-playbook --diff $VAULTOPTS ${INOPTS[@]} ${ANSIBLE_PLAYBOOK_FILE}" && \
      time ansible-playbook --diff ${VAULTOPTS} "${ANSIBLE_PLAYBOOK_FILE}" "${INOPTS[@]}"
      return $?
    elif [ "${ANSIBLE_RUN_MODE}" == 'ad-hoc' ]; then
      #echo "ansible --diff $VAULTOPTS ${INOPTS[@]}" && \
      time ansible --diff ${VAULTOPTS} "${INOPTS[@]}"
      return $?
    else
      echo "Invalid run mode: ${ANSIBLE_RUN_MODE}"
      exit 15
    fi
  fi
}

if [ "${VAGRANT_MODE}" -eq 1 ]; then
  export ANSIBLE_SSH_ARGS="-o UserKnownHostsFile=/dev/null"
  export ANSIBLE_HOST_KEY_CHECKING=false
fi

run_ansible "$@"
retcode=$?
exit $retcode
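All of the script's knobs rely on bash's `${VAR:-default}` fallback expansion: the environment value wins when set, the default applies otherwise. A small standalone sketch of the pattern (the `DEMO_RUN_MODE` variable is hypothetical):

```shell
#!/usr/bin/env bash
# ${VAR:-default}: use the environment value when set, else the default.
unset DEMO_RUN_MODE
mode="${DEMO_RUN_MODE:-playbook}"
echo "mode=${mode}"   # mode=playbook

DEMO_RUN_MODE="ad-hoc"
mode="${DEMO_RUN_MODE:-playbook}"
echo "mode=${mode}"   # mode=ad-hoc
```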