How to use terraform-docs with no modules (only root module)?

I have been trying to use terraform-docs with a sample Terraform infrastructure that is not organized into modules, so it only has the root module (as described in the modules docs). I started by installing terraform-docs with go get, following the instructions on GitHub. The terraform-docs syntax for generating Markdown docs is terraform-docs markdown ./my-terraform-module. If I try to pass a .tf file as the argument, I get:

panic: runtime error: invalid memory address or nil pointer dereference
[signal SIGSEGV: segmentation violation code=0x1 addr=0x18 pc=0x79eb84]

goroutine 1 [running]:
(*cfgreader).exist(0xc000156000, 0xc000156000, 0x2, 0xc000018e80)
    /home/username/go/pkg/mod/ +0xe4
(*Command).execute(0xc0000b1b80, 0xc00014c1d0, 0x1, 0x1, 0xc0000b1b80, 0xc00014c1d0)
    /home/username/go/pkg/mod/ +0x514
(*Command).ExecuteC(0xc0000b0dc0, 0x43c027, 0xba3e80, 0xc000012090)
    /home/username/go/pkg/mod/ +0x349
(*Command).Execute(...)
    /home/username/go/pkg/mod/ +0x2b

When I pass the files' directory as the argument, I get Markdown output containing only the Requirements and Providers sections, plus an Inputs section that lists only the variables and their values. So I ask: is it possible to use terraform-docs with the root module?
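For reference, terraform-docs takes a module directory rather than an individual .tf file, and the root module is simply the directory holding the top-level .tf files, so the tool is pointed at that directory. A minimal sketch of a .terraform-docs.yml config, assuming a recent terraform-docs release (the go get-installed version in the question may predate config-file support):

```yaml
# .terraform-docs.yml, placed in the root module directory (hypothetical setup)
formatter: markdown table
output:
  file: README.md
  mode: inject
```

With this in place, `terraform-docs .` (or `terraform-docs markdown .` on older versions) is run from inside the root module directory instead of being given a single file.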

Thanks in advance

Go to Source
Author: rtrigo

Get a key's value from a Go template in a Prometheus/Alertmanager/Jiralert attempt to map severity to a Jira task

I am using the Prometheus Alertmanager integration with the Jiralert Go tool to create Jira tasks out of alerts.
Everything works as expected except that I don't know how to inherit the severity from the current alert; I am hard-coding a value at this point. I have tried to get the value from .Labels (which is retrieved successfully inside the template, but not when I call it from the config YAML).

# Content of jiralert.yaml
# Global defaults, applied to all receivers where not explicitly overridden. Optional.
defaults:
  # API access fields.
  user: ''
  password: 'randompassword'

  # The type of JIRA issue to create. Required.
  issue_type: Alert
  # Issue priority. Optional.
  #priority: Low
  # Go template invocation for generating the summary. Required.
  summary: '{{ template "jira.summary" . }}'
  # Go template invocation for generating the description. Optional.
  description: '{{ template "jira.description" . }}'
  # State to transition into when reopening a closed issue. Required.
  reopen_state: "To Do"
  # Do not reopen issues with this resolution. Optional.
  wont_fix_resolution: "Won't Fix"
  # Amount of time after being closed that an issue should be reopened, after which a new issue is created.
  # Optional (default: always reopen)
  reopen_duration: 0h

# Receiver definitions. At least one must be defined.
receivers:
  # Non-prod cluster for testing
  - name: 'prometheus-test-non-production'
    # JIRA project to create the issue in. Required.
    project: AS
    summary: '{{ template "jira.testenv.summary" . }}'
    description: '{{ template "jira.testenv.description" . }}'
    # Copy all Prometheus labels into separate JIRA labels. Optional (default: false).
    add_group_labels: false
    fields:
      customfield_10600: { "value": '{{ template ".Alerts.Front().Labels.severity.Value" }}' }

template: jiralert_v2.tmpl
# jiralert_v2.tmpl
{{ define "jira.summary" }}{{end}}
{{ define "jira.description" }}{{end}}

{{ define "jira.testenv.summary" }}[Prometheus][K8s/testenv-ops][{{ .Status | toUpper }}{{- if eq .Status "firing" -}}:{{- .Alerts.Firing | len -}}{{- end -}}]{{ .CommonLabels.alertname }} for {{ .CommonLabels.job }}{{- if gt (len .CommonLabels) (len .GroupLabels) -}}{{" "}}({{- with .CommonLabels.Remove .GroupLabels.Names }}{{- range $index, $label := .SortedPairs -}}{{ if $index }}, {{ end }}{{- $label.Name }}="{{ $label.Value -}}"{{- end }}{{- end -}}){{- end }}{{- end }}
{{ define "jira.testenv.description" }}
    {{ with index .Alerts 0 -}}
    *URL: <{{ .GeneratorURL }}>*
    {{- if .Annotations.runbook }} :notebook: *<{{ .Annotations.runbook }}|Runbook>*{{ end }}
    {{ end }}
    Kubernetes Cluster: testenv-ops
    Prometheus Alert Details:
    {{ range .Alerts}}
      * Alert Labels:
    {{ range .Labels.SortedPairs }}    ** {{ .Name }}: {{ .Value }}
    {{ end }}
      * Alert Description: {{ .Annotations.message }}
    {{ end }}
{{- end }}
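One detail worth noting: in Go's text/template (which Alertmanager and Jiralert templates build on), {{ template "name" }} invokes a template *by name*; it does not evaluate the quoted string as a field path, which is likely why the customfield attempt above yields nothing. A minimal, self-contained sketch of the pipeline form that does evaluate a label (the Alert and TemplateData types here are hypothetical stand-ins for the real Alertmanager template data):

```go
package main

import (
	"bytes"
	"fmt"
	"text/template"
)

// Hypothetical stand-ins for the data a Jiralert/Alertmanager template
// receives; the real types live in Alertmanager's template package.
type Alert struct {
	Labels map[string]string
}

type TemplateData struct {
	Alerts []Alert
}

// renderSeverity evaluates a field reference as a pipeline expression,
// rather than looking up a named template with {{ template "..." }}.
func renderSeverity(data TemplateData) (string, error) {
	tmpl, err := template.New("severity").Parse(`{{ (index .Alerts 0).Labels.severity }}`)
	if err != nil {
		return "", err
	}
	var buf bytes.Buffer
	if err := tmpl.Execute(&buf, data); err != nil {
		return "", err
	}
	return buf.String(), nil
}

func main() {
	data := TemplateData{Alerts: []Alert{{Labels: map[string]string{"severity": "critical"}}}}
	out, err := renderSeverity(data)
	if err != nil {
		panic(err)
	}
	fmt.Println(out) // prints "critical"
}
```

In the receiver config, the analogous expression would be something like '{{ (index .Alerts 0).Labels.severity }}'; whether Jiralert exposes a Front() helper in that context is not guaranteed, so the index form is the safer assumption.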

Go to Source
Author: anVzdGFub3RoZXJodW1hbg

Can’t upload data to Google Cloud Storage from a chaincode instance in Hyperledger Fabric

I tried to write a chaincode such that, when it’s executed in a peer instance, it uploads data to a Google Cloud Storage bucket. The file I’ll be uploading is actually stored as small file chunks in a folder, so that different peers upload different chunks to the GCS bucket. I’m using the fabcar blueprint to develop this chaincode, and the test-network script files to execute it. The function I use to upload data works well when I execute it locally, but when I try to use it in the chaincode, it shows Error: endorsement failure during invoke. response: status:500 message:”error in simulation: failed to execute transaction 49a9b96088ff2f32906a6b6c9ba1f4ac0a530779bf8d506b176fcdfb8818afe2: error sending: chaincode stream terminated” (What I’m doing might sound crazy, but I’m new to Hyperledger Fabric.)

Below is the code sample I’m executing (I think the problem is in the uploadGCS or InitLedger function). (FYI: the chaincode execution runs only the InitLedger function, which of course uses the uploadGCS function.)

package main

import (
    "context"
    "crypto/sha256"
    "encoding/hex"
    "encoding/json"
    "fmt"
    "io"
    "log"
    "os"
    "path/filepath"
    "strconv"
    "strings"
    "time"

    "cloud.google.com/go/storage"
    "github.com/hyperledger/fabric-contract-api-go/contractapi"
    "golang.org/x/oauth2/google"
    "google.golang.org/api/option"
)

type SmartContract struct {
    contractapi.Contract
}

type Data struct {
    Owner           string `json:"owner"`
    File            string `json:"file"`
    FileChunkNumber string `json:"filechunknumber"`
    SHA256          string `json:"sha256"`
}

func uploadGCS(owner, filechunklocation, uploadlocation string) error {
    ct := context.Background()
    // NOTE: ScopeReadOnly does not permit writes; ScopeReadWrite is
    // needed for uploads to succeed.
    creds, err := google.FindDefaultCredentials(ct, storage.ScopeReadOnly)
    if err != nil {
        log.Fatalf("Got an err %s", err)
    }

    client, err := storage.NewClient(ct, option.WithCredentials(creds))
    if err != nil {
        return fmt.Errorf("storage.NewClient: %v", err)
    }
    defer client.Close()

    // Open local file.
    f, err := os.Open(filechunklocation)
    if err != nil {
        return fmt.Errorf("os.Open: %v", err)
    }
    defer f.Close()

    ct, cancel := context.WithTimeout(ct, time.Second*50)
    defer cancel()

    // Upload an object with storage.Writer.
    wc := client.Bucket("btp2016bcs0015-cloud-storage").Object(uploadlocation).NewWriter(ct)
    if _, err = io.Copy(wc, f); err != nil {
        return fmt.Errorf("io.Copy: %v", err)
    }
    if err := wc.Close(); err != nil {
        return fmt.Errorf("Writer.Close: %v", err)
    }
    return nil
}

func (s *SmartContract) InitLedger(ctx contractapi.TransactionContextInterface) error {
    filelocation := "/home/busyfriend/go/src/"
    data := []Data{
        {Owner: "ID126859", File: "samplefile.pdf", FileChunkNumber: "1", SHA256: "eb73a20d61c1fb294b0eba4d35568d10c8ddbfe2544a3cacc959d640077673f5"},
        {Owner: "ID126859", File: "samplefile.pdf", FileChunkNumber: "2", SHA256: "92dd8ea8aa0da4a48a2cb45ae38f70f17526b6b50ef80c44367a56de6ec9abf9"},
        {Owner: "ID126859", File: "samplefile.pdf", FileChunkNumber: "3", SHA256: "b97027d261d01f86d1e514a52886add096ddc4e66d15d01e53516dd9d5cfb20b"},
        {Owner: "ID126859", File: "samplefile.pdf", FileChunkNumber: "4", SHA256: "377582f5e62dc3b34e40741f2d70d8f37a029856f75cbe68a6659328258e23a3"},
        {Owner: "ID126859", File: "samplefile.pdf", FileChunkNumber: "5", SHA256: "afb6c6d112d446ac07d78b13957bb440105038411095032de444bf08e3bbdba8"},
        {Owner: "ID126859", File: "samplefile.pdf", FileChunkNumber: "6", SHA256: "e43b885c2bfb47130c54fa70528fb2a91d9d1af1417a0f7c5a4c22d8f16efb01"},
    }

    for i := range data {
        _, dir := filepath.Split(filelocation)
        dir_1 := strings.Split(dir, "---")
        filechunk := dir_1[0] + "_" + data[i].FileChunkNumber
        filechunklocation := filepath.Join(filelocation, filechunk)
        uploadlocation := data[i].Owner + "/" + dir + "/" + filechunk

        err := uploadGCS(data[i].Owner, filechunklocation, uploadlocation)
        if err != nil {
            return fmt.Errorf("Got an error %s", err.Error())
        }
    }

    for i, putdata := range data {
        dataAsBytes, _ := json.Marshal(putdata)
        err := ctx.GetStub().PutState("DATA"+strconv.Itoa(i), dataAsBytes)
        if err != nil {
            return fmt.Errorf("Failed to put to world state. %s", err.Error())
        }
    }
    return nil
}

// Uploads new data to the world state with given details
func (s *SmartContract) uploadData(ctx contractapi.TransactionContextInterface, dataID string, owner string, filelocation string, filechunknumber string) error {
    // Uploads the file chunk to cloud storage
    _, dir := filepath.Split(filelocation)
    dir_1 := strings.Split(dir, "---")
    filechunk := dir_1[0] + "_" + filechunknumber
    filechunklocation := filepath.Join(filelocation, filechunk)
    uploadlocation := owner + "/" + dir + "/" + filechunk
    err := uploadGCS(owner, filechunklocation, uploadlocation)
    if err != nil {
        return err
    }

    // Creates SHA256 hash of the file chunk
    f, err := os.Open(filechunklocation)
    if err != nil {
        return err
    }
    defer f.Close()
    h := sha256.New()
    if _, err := io.Copy(h, f); err != nil {
        return err
    }

    data := Data{
        Owner:           owner,
        File:            dir_1[0] + "." + dir_1[1],
        FileChunkNumber: filechunknumber,
        SHA256:          hex.EncodeToString(h.Sum(nil)),
    }

    dataAsBytes, _ := json.Marshal(data)

    return ctx.GetStub().PutState(dataID, dataAsBytes)
}

func main() {
    chaincode, err := contractapi.NewChaincode(new(SmartContract))
    if err != nil {
        fmt.Printf("Error create cloud chaincode: %s", err.Error())
        return
    }

    if err := chaincode.Start(); err != nil {
        fmt.Printf("Error starting cloud chaincode: %s", err.Error())
    }
}
This is what I got after executing the chaincode:
terminal result

Go to Source
Author: Sai Madhav