Cookie not persisting on iOS devices after app has shut down

I’m having an issue with persisting cookies in iOS on React Native/Expo using Axios.

I have a Nest.js server that uses Passport to authenticate users; when a user logs in, a cookie is placed on the device for all subsequent requests. I’m handling this behavior with credentials: 'include' on each request. This all works fine on Android, and even on iOS, until the user closes and reopens the app (on iOS), after which the cookie sent to the server is null. At first I thought this was an issue with React Native, so I decided to handle the cookie myself.

Axios interceptor which retrieves and stores the cookie in local storage:

axios.interceptors.response.use(async response => {
    const cookie: Array<string> = response.headers['set-cookie'];
    if (cookie) {
        const cookieHeader: Array<string> = setCookie.splitCookiesString(cookie);
        const cookies: setCookie.Cookie[] = setCookie.parse(cookieHeader);
        await Cache.saveCookie(cookies);
    }
    return response;
}, async (error: AxiosError) => {
    // error handling
});

I’m using the set-cookie-parser package recommended here. My Cache module just saves the value to the device’s storage using AsyncStorage.
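
The Cache module isn’t shown in the question, so here is a minimal runnable sketch of what it might look like. Assumptions are flagged in the comments: the real module presumably wraps AsyncStorage, while this sketch injects an in-memory store with the same getItem/setItem shape so it is self-contained:

```javascript
// Stand-in for AsyncStorage (an assumption -- the real Cache module is not
// shown in the question). Same async getItem/setItem contract, backed by a Map.
const memoryStore = (() => {
  const data = new Map();
  return {
    getItem: async (key) => (data.has(key) ? data.get(key) : null),
    setItem: async (key, value) => { data.set(key, value); },
  };
})();

const COOKIE_KEY = '@session_cookie';

const Cache = {
  storage: memoryStore, // swap in AsyncStorage in the real app

  // Serialize the parsed cookies to JSON: storage backends hold only strings.
  async saveCookie(cookies) {
    await Cache.storage.setItem(COOKIE_KEY, JSON.stringify(cookies));
  },

  // Return the parsed cookie array, or null if nothing was saved yet.
  async getCookie() {
    const raw = await Cache.storage.getItem(COOKIE_KEY);
    return raw ? JSON.parse(raw) : null;
  },
};
```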

Then my request interceptor which adds the cookie to each request (if set):

axios.interceptors.request.use(async config => {
    const cookies: setCookie.Cookie[] = await Cache.getCookie();
    if (cookies) {
        const cookie: string = cookies.map(d => `${d.name}=${d.value}`).join('; ');
        return {
            ...config,
            headers: {
                ...config.headers,
                // the cookie must be set as a request header, not at the
                // top level of the axios config
                Cookie: cookie,
            },
        };
    }

    return config;
});

So the solution above works on Android, and on iOS only until the app is closed and reopened, just like before.

Debugging on my server, I can see the cookie in every request from the client, but once the app is closed and reopened the cookie value in the request is null, even though I have confirmed the cookie is retrieved and set in the interceptor above without issue. It’s almost as if something else is tampering with the HTTP request after my interceptor has added the cookie.

As I’ve said before this only ever happens after the app has been closed and reopened. Happy to give more context about either the application code or the server code.

Thanks in advance.

Go to Source
Author: Harry Bendix-Lewis

ACF Date Based wp_query

I’ve been trying to order these exhibitions by the end_date custom field created with Advanced Custom Fields, but I can’t get it working properly. I need the most recent dates first, and I need to return only posts with an exhibition_status of ‘past’. For the life of me I can’t get this working, and the code below is just the latest non-working iteration.

$args = array(
    'post_type'      => 'exhibitions',
    'meta_query'     => array(
        'relation'   => 'OR',
        'query_one'  => array(
            'key'    => 'exhibition_status',
            'value'  => 'past', // Optional
        ),
        'query_two'  => array(
            'key'     => 'end_date',
            'compare' => '>=',
        ),
    ),
    'orderby'        => 'end_date',
    'order'          => 'ASC',
    'post_status'    => 'publish',
    'posts_per_page' => 10,
    'paged'          => get_query_var( 'paged' ),
);
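
For comparison, a hedged sketch of arguments that would match the stated goal (only ‘past’ exhibitions, most recent end_date first). It assumes the ACF date picker stores end_date in its default Ymd format, which sorts chronologically as a string; ordering by a meta value requires naming the field via meta_key at the top level:

```php
$args = array(
    'post_type'      => 'exhibitions',
    'post_status'    => 'publish',
    'posts_per_page' => 10,
    'paged'          => get_query_var( 'paged' ),
    'meta_key'       => 'end_date',   // the field to sort on must be named here
    'orderby'        => 'meta_value', // Ymd strings sort chronologically
    'order'          => 'DESC',       // most recent end_date first
    'meta_query'     => array(
        array(
            'key'   => 'exhibition_status',
            'value' => 'past',
        ),
    ),
);
```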

Go to Source
Author: smack-a-bro

Need HA in Mysql GTID based replication

I am using GTID-based replication in MySQL 5.7. I have a few questions:

 1. How can we ensure that the slave databases are consistent with the master
    database?
 2. How will failover work if the master is down?
 3. How do we ensure high availability in master/slave GTID replication?
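
For question 1, a hedged sketch of one common consistency check (GTID_SUBSET is a built-in MySQL 5.7 function; the set string below is a placeholder to fill in):

```sql
-- Run on the master and note the executed GTID set:
SELECT @@GLOBAL.gtid_executed;

-- Run on the slave: returns 1 when the master's set is contained in the
-- slave's executed set, i.e. the slave has applied everything it has seen.
SELECT GTID_SUBSET('<master gtid_executed here>', @@GLOBAL.gtid_executed);

-- Also worth watching on the slave:
SHOW SLAVE STATUS\G  -- Seconds_Behind_Master, Retrieved_Gtid_Set, Executed_Gtid_Set
```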

Go to Source
Author: Uday Raj Gupta

Can’t get 4K on both monitors

I have 18.04 installed on a Mac Mini from (I think) 2015. I just installed two identical 4K monitors, one through the built-in HDMI port, the other through the built-in Thunderbolt port via a MDP->DP cable.

Both monitors are detected and seem to work. The one on the DP connection displays in 4K; the one on the HDMI connection has a maximum resolution of 1920×1080.

I’m a relative newbie and not really sure how to start troubleshooting this. I’m not even certain whether the problem is Linux configuration or an intrinsic hardware limitation. Help? Thanks…
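
A hedged first step, assuming an Xorg session: xrandr lists the modes the driver actually offers on each output, which shows whether 3840×2160 is available on the HDMI connector at all (the output name below is illustrative; yours may differ):

xrandr --query                                      # per-output list of supported modes
xrandr --output HDMI-1 --mode 3840x2160 --rate 30   # HDMI 1.4-era ports often cap 4K at 30 Hz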

Go to Source
Author: grichard

Using awk, how can I format this data so that 14-0273 goes under ID, 06/28/2014 under Date, and so on for the other titles?

ID,Date,DateType,Age,Sex,Race,ResidenceCity,ResidenceCounty,ResidenceState,DeathCity,DeathCounty,Location,LocationifOther,DescriptionofInjury,InjuryPlace,InjuryCity,InjuryCounty,InjuryState,COD,OtherSignifican,Heroin,Cocaine,Fentanyl,FentanylAnalogue,Oxycodone,Oxymorphone,Ethanol,Hydrocodone,Benzodiazepine,Methadone,Amphet,Tramad,Morphine_NotHeroin,Hydromorphone,Other,OpiateNOS,AnyOpioid,MannerofDeath,DeathCityGeo,ResidenceCityGeo,InjuryCityGeo
14-0273,06/28/2014 12:00:00 AM,DateReported,,,,,,,,,,,substance,,,,,"Acute fent, hydrocod, benzodiazepine",,,,Y,,,,,Y,Y,,,,,,,,,Accident,"CT
(41.575155, -72.738288)","CT
(41.575155, -72.738288)","CT
(41.575155, -72.738288)"
13-0102,03/21/2013 12:00:00 AM,DateofDeath,48,Male,Black,NORWALK,,,NORWALK,FAIRFIELD,Hospital,,,,,,,Cocaine Intoxication,,,Y,,,,,,,,,,,,,,,,Accident,"Norwalk, CT
(41.11805, -73.412906)","NORWALK, CT
(41.11805, -73.412906)","CT
(41.575155, -72.738288)"
16-0165,03/13/2016 12:00:00 AM,DateofDeath,30,Female,White,SANDY HOOK,FAIRFIELD,CT,DANBURY,,Hospital,,Substance Abuse,Unknown,UNKNOWN,,,Acute Heroin and Cocaine Intoxication,,Y,Y,,,,,,,,,,,,,,,Y,Accident,"Danbury, CT
(41.393666, -73.451539)","SANDY HOOK, CT
(41.419998, -73.282501)",
16-0208,03/31/2016 12:00:00 AM,DateofDeath,23,Male,White,RYE,WESTCHESTER,NY,GREENWICH,,Hospital,,substance abuse,Residence,RYE,,,Acute Fentanyl and Morphine Intoxication,,Y,,Y,,,,,,,,,,,,,,Y,Accident,"Greenwich, CT
(41.026526, -73.628549)",,
13-0052,02/13/2013 12:00:00 AM,DateofDeath,22,Male,"Asian, Other",FLUSHING,QUEENS,,GREENWICH,FAIRFIELD,Hospital,,Transdermal Absorption,Other,,,,Fentanyl Intoxication,,,,Y,,,,,,,,,,,,,,,Accident,"Greenwich, CT
(41.026526, -73.628549)",,"CT
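
A hedged, runnable sketch of the ID/Date part of the task (the sample file below is shortened and the file name is illustrative). One caveat: the real data has quoted Geo fields containing commas and embedded newlines, which plain -F',' awk cannot parse; this sketch only touches the first two columns, which are unquoted:

```shell
# Create a trimmed sample so the sketch is self-contained.
cat > sample.csv <<'EOF'
ID,Date,DateType
14-0273,06/28/2014 12:00:00 AM,DateReported
13-0102,03/21/2013 12:00:00 AM,DateofDeath
EOF

# Print the ID and just the date portion of the second field.
awk -F',' '
    NR == 1                { print $1, $2; next }                 # header row
    $1 ~ /^[0-9]+-[0-9]+$/ { split($2, d, " "); print $1, d[1] }  # drop the time part
' sample.csv
```

This prints `ID Date`, then `14-0273 06/28/2014` and `13-0102 03/21/2013`, one record per line.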

Go to Source
Author: nish patel

VMWare ESXi error: Failed to deploy OVF package: Invalid configuration for device X

I know this has been addressed several times, but the solution usually involves using a tool other than what I’m using: ovftool. I have a customer that’s getting this error while trying to import an OVA file that we’ve put together, and I’m trying to figure out how to solve this issue just using non-windows tools (VMware Standalone Converter, etc).

According to https://kb.vmware.com/s/article/2152046, the issue is that ESXi 6.0 doesn’t like -1 values and the workaround is to edit the OVF descriptor to remove vmw:value="-1". However, I don’t have that in my VMX file. Here’s a copy of my VMX file:

.encoding = "UTF-8"
displayname = "RandomVM"
annotation = "Kali Rolling (2020.1) x64|0A2020-01-28|0A|0A- - - - - - - - - - - - - - - - - -|0A|0AUsername: kali|0APassword: kali|0A(US keyboard layout)|0A|0A- - - - - - - - - - - - - - - - - -|0A|0A* Kali Homepage:|0Ahttps://www.kali.org/|0A|0A* VM Homepage:|0Ahttps://www.offensive-security.com/kali-linux-vmware-virtualbox-image-download/|0A|0A* Documentation:|0Ahttps://www.kali.org/docs/|0A|0A* Forum/Support:|0Ahttps://forums.kali.org/|0A|0A* Kali Tools:|0Ahttps://tools.kali.org/|0A|0A* Bug Tracker:|0Ahttps://bugs.kali.org/|0A|0A* IRC Channel:|0Airc://irc.freenode.net/#Kali-Linux|0A|0A* Offsec IRC Channel:|0Airc://irc.freenode.net/#Offsec|0A|0A* Release Information:|0Ahttps://www.kali.org/kali-linux-releases/"
guestos = "otherlinux-64"
virtualhw.version = "15"
config.version = "8"
numvcpus = "4"
cpuid.coresPerSocket = "1"
memsize = "4068"
pciBridge0.present = "TRUE"
pciBridge4.present = "TRUE"
pciBridge4.virtualDev = "pcieRootPort"
pciBridge4.functions = "8"
pciBridge5.present = "TRUE"
pciBridge5.virtualDev = "pcieRootPort"
pciBridge5.functions = "8"
pciBridge6.present = "TRUE"
pciBridge6.virtualDev = "pcieRootPort"
pciBridge6.functions = "8"
pciBridge7.present = "TRUE"
pciBridge7.virtualDev = "pcieRootPort"
pciBridge7.functions = "8"
vmci0.present = "TRUE"
floppy0.present = "FALSE"
ide1:0.clientDevice = "FALSE"
ide1:0.present = "TRUE"
ide1:0.deviceType = "atapi-cdrom"
ide1:0.autodetect = "TRUE"
ide1:0.startConnected = "FALSE"
ide1:0.allowguestconnectioncontrol = "false"
svga.vramSize = "268435456"
usb.present = "TRUE"
ehci.present = "TRUE"
ethernet0.present = "TRUE"
ethernet0.virtualDev = "e1000"
ethernet0.connectionType = "nat"
ethernet0.startConnected = "TRUE"
ethernet0.addressType = "generated"
ethernet0.pciSlotNumber = "33"
ethernet0.wakeonpcktrcv = "true"
ethernet0.allowguestconnectioncontrol = "true"
scsi0:0.present = "TRUE"
scsi0:0.deviceType = "disk"
scsi0:0.fileName = "RandomVM-disk1.vmdk"
scsi0:0.allowguestconnectioncontrol = "false"
scsi0:0.mode = "persistent"
scsi0.virtualDev = "lsilogic"
scsi0.present = "TRUE"
vmci0.unrestricted = "false"
vcpu.hotadd = "true"
tools.syncTime = "true"
toolscripts.afterpoweron = "true"
toolscripts.afterresume = "true"
toolscripts.beforepoweroff = "true"
toolscripts.beforesuspend = "true"
powerType.powerOff = "soft"
powerType.reset = "soft"
powerType.suspend = "soft"
nvram = "RandomVM-file1.nvram"
virtualHW.productCompatibility = "hosted"
extendedConfigFile = "RandomVM.vmxf"
uuid.bios = "56 4d 3a cb 08 86 6c 65-30 c2 23 67 81 2a 24 03"
uuid.location = "56 4d 3a cb 08 86 6c 65-30 c2 23 67 81 2a 24 03"
scsi0:0.redo = ""
pciBridge0.pciSlotNumber = "17"
pciBridge4.pciSlotNumber = "21"
pciBridge5.pciSlotNumber = "22"
pciBridge6.pciSlotNumber = "23"
pciBridge7.pciSlotNumber = "24"
scsi0.pciSlotNumber = "16"
usb.pciSlotNumber = "32"
ehci.pciSlotNumber = "34"
vmci0.pciSlotNumber = "35"
vmotion.checkpointFBSize = "4194304"
vmotion.checkpointSVGAPrimarySize = "268435456"
ethernet0.generatedAddress = "00:0c:29:2a:24:03"
ethernet0.generatedAddressOffset = "0"
monitor.phys_bits_used = "43"
cleanShutdown = "TRUE"
softPowerOff = "TRUE"
usb:1.speed = "2"
usb:1.present = "TRUE"
usb:1.deviceType = "hub"
usb:1.port = "1"
usb:1.parent = "-1"
svga.guestBackedPrimaryAware = "TRUE"
usb:0.present = "TRUE"
usb:0.deviceType = "hid"
usb:0.port = "0"
usb:0.parent = "-1"
checkpoint.vmState = ""
guestOS.detailed.data = "bitness='64' distroName='Kali' distroVersion='2020.2' familyName='Linux' kernelVersion='5.5.0-kali2-amd64' prettyName='Kali GNU/Linux Rolling'"

The only values in here that are -1 are the following:

usb:1.parent = "-1"
usb:0.parent = "-1"

Given this, I don’t really know what “device 2” refers to, and I don’t know whether simply removing those values would cause additional errors. Unfortunately I don’t have an ESXi environment to try this on, but I also can’t keep asking the customer to try it again over and over.

My primary OS is macOS Catalina and so this is why I’m trying to use the ovftool in hopes that it’ll solve this problem. Here’s how I’m currently exporting the VM:

"/Applications/VMware Fusion.app/Contents/Library/VMware OVF Tool/ovftool" --maxVirtualHardwareVersion=07 --shaAlgorithm=SHA1 RandomVM.vmx RandomVM.ova
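
If it comes to editing the descriptor by hand, no ESXi host is needed for that step: an OVA is a plain tar archive containing the OVF descriptor and a manifest of SHA1 hashes. A hedged sketch (file names are illustrative; sha1sum is the GNU tool, shasum -a 1 on macOS):

```shell
# Build a toy OVA so the sketch is self-contained.
mkdir -p ova-demo && cd ova-demo
printf '<Envelope>example descriptor</Envelope>\n' > RandomVM.ovf
printf 'SHA1(RandomVM.ovf)= %s\n' "$(sha1sum RandomVM.ovf | cut -d' ' -f1)" > RandomVM.mf
tar -cf RandomVM.ova RandomVM.ovf RandomVM.mf   # descriptor must be the first member in a real OVA

# The workaround itself: unpack, edit the descriptor, refresh the manifest hash, repack.
tar -xf RandomVM.ova
sed -i.bak 's/example/edited/' RandomVM.ovf && rm RandomVM.ovf.bak
printf 'SHA1(RandomVM.ovf)= %s\n' "$(sha1sum RandomVM.ovf | cut -d' ' -f1)" > RandomVM.mf
tar -cf RandomVM.ova RandomVM.ovf RandomVM.mf
```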

Any help in this area would be greatly appreciated.

Go to Source
Author: LewlSauce

SEO friendly way to link to authority identifiers from a page, without installing extensions?

I am looking for SEO-friendly ways to add authority identifiers to pages or categories so that search engines find them easily, without installing the Wikibase, Semantic MediaWiki, or other extensions (I’d prefer not to go into why).

Example:
A page about a person may have many identifiers, e.g.:

  • LOC number (Library of Congress)
  • WorldCat identifier
  • Biography database identifiers (e.g. a Who’s Who identifier)
  • BNF identifier
  • Wikidata identifier

Most of these authority identifiers will be links but not all.

On Wikipedia these identifiers are shown either in the infobox template at the top or in the authority control template in the footer, both of which are linked to the relevant Wikidata pages using the Wikibase extension.

SEO:
Authority identifiers help search engines distinguish a page’s subject from other pages about identically named subjects (many people share the same name), and can also link the page with other relevant content.

So what is the SEO-friendly way to include authority identifiers without installing extensions? I plan to use a template, but plain links in a footer template seem unlikely to be crawled as effectively, and I don’t want the Google snippet to be full of identifiers from a template.

Go to Source
Author: Mousey

What happens if a sender changes the TCP window size over multiple packets that have the same ACK number?

I’m currently doing research on evasion attacks that seek to bypass a Deep-learning based Network Intrusion Detection System.

In order to achieve this, I need to know what the constraints are for the TCP window size field in the TCP packet header. Imagine a client has just sent the last TCP-ACK packet to a server in order to complete the 3-way handshake. He then immediately proceeds to send a GET request to the server (these 2 packets are thus sent one after the other, and contain the same ACK-number).

What happens if the TCP window size in the TCP-ACK packet does not match the window size in the TCP packet containing the GET request? Will the receiver simply observe the last value for the window size that he obtained? Or will there be a violation in the TCP protocol in any way? You can assume that the change in window size is very small, and will not cause the buffer to be full.

More generally, if the client sends N uninterrupted packets (e.g. a heavy-load POST request), can he change the window size in each packet header without repercussions?

Go to Source
Author: Gints Engelen

Gitlab CI push to docker-registry fails

I have GitLab server and a separate docker-registry. I have created a testuser with a testpassword.

My .gitlab-ci.yml

image: docker:latest
services:
  - docker:18.09.7-dind
    
variables:
  DOCKER_HOST: tcp://docker:2375/
  DOCKER_DRIVER: overlay2
    # See https://github.com/docker-library/docker/pull/166
  DOCKER_TLS_CERTDIR: ""
  REGISTRY: myregistry:5000/golang-test

stages:
  - build
 # - test
before_script:
  - docker login -u testuser -p testpassword  myregistry:5000

build_project:
    stage: build
    tags:
      - docker-ci
    script:
      - docker ps
      - docker build -t $REGISTRY .
      - docker push $REGISTRY

When CI pipeline runs, I get following error.

WARNING! Using --password via the CLI is insecure. Use --password-stdin.
Error response from daemon: Get https://myregistry:5000/v2/: dial tcp: lookup worker1 on 192.168.123.456:53: no such host
ERROR: Job failed: exit code 1

My runner’s config.toml:

concurrent = 1
check_interval = 0

[session_server]
  session_timeout = 1800

[[runners]]
  name = "docker-ci"
  url = "https://gitlab.sample.com:8000/"
  token = "89MfxRTby83ny3eTkTf2"
  tls-ca-file = "/etc/gitlab-runner/certs/gitlab.sample.com.crt"
  executor = "docker"
  pre_build_script = "export DOCKER_HOST=tcp://docker:2375"
  [runners.custom_build_dir]
  [runners.cache]
    [runners.cache.s3]
    [runners.cache.gcs]
  [runners.docker]
    tls_verify = false
    image = "ubuntu:18.04"
    privileged = true
    disable_entrypoint_overwrite = false
    oom_kill_disable = false
    disable_cache = false
    volumes = ["/cache"]
    shm_size = 0

This was after suggested modifications here.

I can successfully log in to my docker-registry using docker login -u testuser -p testpassword https://myregistry:5000 from the server running GitLab.
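
Unrelated to the DNS failure itself, the WARNING in the log can be silenced by feeding the password on stdin instead of the command line; a hedged sketch (the variable names are illustrative and would be set as masked CI/CD variables):

```yaml
before_script:
  - echo "$REGISTRY_PASSWORD" | docker login -u "$REGISTRY_USER" --password-stdin myregistry:5000
```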

Go to Source
Author: Sachith Muhandiram

Call WCF service from ASP.NET core 3.1 API

I am getting the error below when trying to call a WCF service from ASP.NET Core. It happens only when both the WCF service and the API are deployed on IIS 10.

Microsoft.AspNetCore.Mvc.Infrastructure.ControllerActionInvoker.g__Awaited|10_0(ControllerActionInvoker invoker, Task lastTask, State next, Scope scope, Object state, Boolean isCompleted)
System.ServiceModel.CommunicationException: No connection could be made because the target machine actively refused it.
 ---> System.Net.Http.HttpRequestException: No connection could be made because the target machine actively refused it.
 ---> System.Net.Sockets.SocketException (10061): No connection could be made because the target machine actively refused it.
   at System.Net.Http.ConnectHelper.ConnectAsync(String host, Int32 port, CancellationToken cancellationToken)

I am able to connect to the same WCF service from an ASP.NET MVC controller. It fails only when accessed from .NET Core.

Any thoughts here would be greatly appreciated.

Thanks,
Pani

Go to Source
Author: paniraj

Postgres PLPGSQL ‘un-stored’ procedures?

I’m getting the following error when I run the query below:

ERROR: query has no destination for result data
Hint: If you want to discard the results of a SELECT, use PERFORM instead.
Where: PL/pgSQL function inline_code_block line 12 at SQL statement

I don’t want to discard the results, though! I want to display them. What’s happening here? Is Postgres refusing to return results from a block that isn’t a UDF or stored procedure, or is it some other syntax issue?

DO
$$
declare 
    tenant text;
    result1 integer;
    result2 integer;
BEGIN

tenant='mycustomer1';
EXECUTE format('SELECT count(*) from ' || tenant || E'.accounts_user;') INTO result1;

tenant='mycustomer2';
EXECUTE format('SELECT count(*) from ' || tenant || E'.accounts_user;') INTO result2;

select result1, result2;

END;
$$ LANGUAGE plpgsql;
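
For what it’s worth, when the schema names are known up front (as in the block above), the counts can be returned as an ordinary result set with plain SQL, with no DO block and nothing created; a hedged sketch reusing the question’s own schema names:

```sql
SELECT
    (SELECT count(*) FROM mycustomer1.accounts_user) AS result1,
    (SELECT count(*) FROM mycustomer2.accounts_user) AS result2;
```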

Background: I’m trying to create a set of dynamic queries that can get values from multiple PG schemas to return in a single report.

Context: I’m using Metabase to run the query, with a read only connection talking to RDS / Postgres. My plan is to store the queries as Metabase items. I was originally going to try creating a temp table to put my results in, but I’m not allowed to CREATE anything, including UDFs and stored procedures, hence my thinking of this as an un-stored procedure.

Go to Source
Author: Kyle Hodgson

How do I rename multiple files by removing everything but numbers?

I want to rename files like:

SL Benfica vs. SC Beira-Mar 136.mp4
SL Benfica vs. SC Beira-Mar 137.mp4
SL Benfica vs. SC Beira-Mar 138.mp4
SL Benfica vs. SC Beira-Mar Jogo 074.mp4
SL Benfica vs. SC Beira-Mar Jogo 082.mp4
SL Benfica vs. SC Beira-Mar Jogo 112.mp4

But this

for f in *.mp4; do echo mv "$f" "${f//[^0-9]/}.mp4"; done

Adds a “4” at the end:

1364.mp4
1374.mp4
1384.mp4
0744.mp4
0824.mp4
1124.mp4

I think that it gets confused with the “4” in “mp4”. How can I solve this?
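
Exactly: the whole filename, extension included, goes through the substitution. One way around it (a bash sketch; the files below recreate a couple of the names from the question) is to strip the .mp4 suffix before deleting non-digits, so the extension’s “4” never enters the substitution:

```shell
# Recreate a couple of the original filenames in a scratch directory.
mkdir -p rename-demo && cd rename-demo
touch 'SL Benfica vs. SC Beira-Mar 136.mp4' 'SL Benfica vs. SC Beira-Mar Jogo 074.mp4'

for f in *.mp4; do
    base=${f%.mp4}                      # drop the extension first
    mv -- "$f" "${base//[^0-9]/}.mp4"   # then keep only the digits
done
ls   # 074.mp4  136.mp4
```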

Go to Source
Author: mykahveli

Configuring HAProxy cache-store to only cache specific file types

I’m building a new load balancer and need to configure the cache to store only certain file types. At the moment, my config caches everything small enough to fit. I’ve tried setting a variable based on the request and referencing it in the cache-store statement, but judging by the service’s behavior, it’s still caching everything.

We configured a test box to pull from and included a delay in the response so that we’d be able to determine if the response was from the test server or the cache. We can also see the curls in the access log if it’s actually hitting the server. I’m not concerned that my test is faulty; I have clear proof when a request hits the server vs when it’s pulled from cache.

Here are the relevant portions of my config

frontend <redacted>
        bind <IP>:80
        bind <IP>:443 ssl crt <certificate.pem> ssl-min-ver TLSv1.2 alpn h2,http/1.1

        http-request set-var(req.path) hdr(Path) if { path_end .jpeg .jpg .png .gif .svg .ico .css .less .ttf .eot }

        acl cachedata path_end .jpg .jpeg .png .gif .svg .ico .css .less .ttf .eot

        http-request add-header X-Forwarded-Proto https if { ssl_fc }
        http-request cache-use lb if cachedata
        http-response cache-store lb if { var(req.path) -m end .jpg .jpeg .png .gif .svg .ico .css .less .ttf .eot }

        redirect scheme https code 301 if !{ ssl_fc }

        default_backend <redacted>

cache lb
        total-max-size 1024
        max-age 600
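
One hedged guess at why the response-side condition never matches: hdr(Path) fetches a request header literally named Path, which normal requests don’t carry, and (as I understand HAProxy’s variable scopes) req.-scoped variables are released before response processing runs. An untested sketch using the path sample and a txn.-scoped variable instead:

```
http-request set-var(txn.path) path
http-response cache-store lb if { var(txn.path) -m end .jpg .jpeg .png .gif .svg .ico .css .less .ttf .eot }
```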

Go to Source
Author: Brad R