How can I ensure the client and server both have access to all API calls in use?

On multiple occasions, we’ve deployed frontend code to production only to find that the backend (REST or GraphQL) hasn’t shipped its side yet. Worse yet, we unexpectedly find out that a param name changed, which may throw an error. Another example: the backend removes an API thinking that no clients use it anymore, and the frontend crashes. If any layer of communication between frontend and backend breaks down, we can end up with catastrophic errors.

I think the “best solution” is to use a tool like Cypress or Codecept to create a suite of integration tests that exercises every API call the frontend may use. Sadly, that’s a heavyweight solution that requires a significant upfront investment of developer time.

Anyway, I’m looking for a simple solution to this problem: maybe something that checks the affected APIs when the frontend opens a PR, and/or something that checks the frontend repo when the backend deploys to production.
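
To make that concrete, here’s the kind of lightweight check I’m imagining could run when the frontend opens a PR. It assumes, purely for illustration, that the backend publishes an OpenAPI document at a known URL and that the frontend repo keeps a small JSON list of the endpoints it calls; the URL, file name, and endpoint strings below are placeholders, not real project details.

// check-api-contract.ts: a rough sketch of a PR check, not a finished tool.
// Hypothetical assumptions: the backend publishes an OpenAPI document, and the
// frontend repo keeps the endpoints it calls in used-endpoints.json,
// e.g. ["GET /users/{id}", "POST /orders"].

import { readFileSync } from "fs";

async function main(): Promise<void> {
  const specRes = await fetch("https://api.example.com/openapi.json"); // placeholder URL
  const spec = await specRes.json();
  const used: string[] = JSON.parse(readFileSync("used-endpoints.json", "utf8"));

  // an endpoint is "missing" if its path or method is absent from the spec
  const missing = used.filter((entry) => {
    const [method, path] = entry.split(" ");
    const pathItem = spec.paths?.[path];
    return !pathItem || !pathItem[method.toLowerCase()];
  });

  if (missing.length > 0) {
    console.error("Frontend calls these endpoints, but the backend spec no longer has them:");
    missing.forEach((m) => console.error("  " + m));
    process.exit(1); // fail the PR check
  }
  console.log("All frontend API calls are present in the backend spec.");
}

main().catch((err) => {
  console.error(err);
  process.exit(1);
});

The same script, pointed at the frontend repo’s endpoint list, could in principle run in the backend’s pipeline before a production deploy to catch removals going the other way.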

Any ideas or experience solving this problem?

Go to Source
Author: Brandon

Synchronize time among several concurrent users

I’m using Google Sheets to put together a quick Jeopardy-style game with an improvised buzzer. All players type the timestamp shortcut (Ctrl+Shift+0) into a cell, and the lowest value is presumed to have buzzed in first, but we’ve realized that some people are a few seconds, even minutes, ahead (or behind). We’ve diagnosed that it’s caused by the time set in each person’s OS.

Is there a way to sync times through the Sheets app? Or at the very least determine who input first?

I’ve tried to remedy this with a series of nested ARRAYFORMULA, IF, and COUNT statements, but to no avail. Any help would be appreciated. Otherwise I’m going to set my OS’s time to 1899 and smoke everyone.
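
One idea I’ve been toying with, as an untested sketch only (the sheet name and column numbers are just placeholders), is to ignore everyone’s local clock entirely and let an Apps Script onEdit trigger stamp the buzz with the spreadsheet’s own server-side time:

// Rough sketch (untested): an onEdit trigger that writes the server-side time
// next to whichever cell a player edits. "Buzzer" and the column numbers are
// placeholders for whatever the real sheet uses.
function onEdit(e: GoogleAppsScript.Events.SheetsOnEdit) {
  const sheet = e.range.getSheet();
  if (sheet.getName() !== "Buzzer") return; // only react on the buzzer sheet
  if (e.range.getColumn() !== 1) return;    // players type into column A
  // new Date() here runs on Google's servers, not on the player's machine
  sheet.getRange(e.range.getRow(), 2).setValue(new Date());
}

Since new Date() in Apps Script runs on Google’s servers, every buzz would be stamped by the same clock regardless of what each player’s OS says; but I’d still like to know if there is a formula-only way.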

Go to Source
Author: Rudy A

What is an example of an out-of-bounds read used to leak sensitive information?

I am trying to understand a little better what goes on behind the scenes when bypassing ASLR by reading bytes from a process’s memory, but how can I build an example of an info leak on Win32? My code below leaks bytes, but how can I work out the image base from those bytes?

#include <stdio.h>
#include <string.h>

int main(int argc, char **argv) {
    char a[16];

    /* strncpy copies exactly sizeof(a) = 16 bytes here; because the source
       string fills the whole buffer, no terminating NUL is written. */
    strncpy(a, "0123456789abcdef", sizeof(a));

    //... lots of code passes, functions are called...
    //... we finally come back to array a ...

    /* %s keeps reading past the end of a until it happens to hit a zero
       byte, so adjacent stack memory is leaked along with the string. */
    printf("%s\n", a);

    return 0;
}

Go to Source
Author: pepe

Jinja syntax error: missing endif, in spite of providing it

I am using Salt 2019.2.0, and below is my state file, where I use an if condition to check pillar values and run states when they match. Even though I have provided the endif, Jinja says it is looking for a missing endif tag. Any help, please?

CODE:

{% if salt['pillar.get']('spade:Corename') == 'FI' and salt['pillar.get']('spade:NodeType') == 'master' %}
dataimport-script:
  file.managed:
    - name: /opt/bin/DataImport.py
    - source: salt://files/DataImport.py

dataimport-crontab:
  file.managed:
    - name: /etc/cron.d/solr-dataimport
    - contents: |
        #
        # set up data import for FI every 2 minutes
        */2 * * * * root /usr/bin/python /opt/bin/DataImport.py

{% elif salt['pillar.get']('spade:NodeType') in ['slave','ds'] and salt['pillar.get']('spade:Corename') == 'FI' %}
update-fi-solrconfig:
  file.replace:
    - name: {{ salt['pillar.get']('solr:home_dir') }}/data/{{ salt['pillar.get']('spade:Corename') }}/conf/solrconfig.xml
    - pattern: '"autoDeletePeriodSeconds">30'
    - repl: '"autoDeletePeriodSeconds">-1'
{% endif %}

ERROR:

local:
    Data failed to compile:
----------
    Rendering SLS 'base:solr.install-solr' failed: Jinja syntax error: Unexpected end of template. Jinja was looking for the following tags: 'endif'. The innermost block that needs to be closed is 'if'.; line 205

---
[...]
update-fi-solrconfig:
  file.replace:
    - name: {{ salt['pillar.get']('solr:home_dir') }}/data/{{ salt['pillar.get']('spade:Corename') }}/conf/solrconfig.xml
    - pattern: '"autoDeletePeriodSeconds">30'
    - repl: '"autoDeletePeriodSeconds">-1'
{% endif %} <======================

Go to Source
Author: user22405

Multisite User Sync Question about blog authorship

I used to have a single WordPress site with a BuddyPress section (logged-out visitors would see the blog only, while logged-in users would see the BuddyPress area). I realised this was ineffective, so I converted it to a multisite, where my “public” website is the root domain and my “intranet” lives at the /team subdirectory. When I visit the network site admin, I see 190 users. When I visit the public website’s admin (the original site), I see 190 users. When I visit the /team subsite’s admin, I only see one user: me, the superadmin. Something looks odd, but I know that on the back end WP has only one users table, so things should theoretically be fine.

  • The posts on my public website are written by people who register to be part of my intranet subsite.
  • The intranet should be the area where members can update their author bio, their profile image, etc. We also discuss new article drafts there.

My question is: should new members register on my main root site, or on the /team subsite? That is, should the “register/join our team” link point to http://main.com/register or http://main.com/team/register ? Does it make a difference? And how do I configure the back end of my multisite so that there is only one blog, and all articles, drafts, etc. are shared across my main site and my team site (for example, I want each member’s profile page on the team intranet to list all the posts they have written for the public website)?

Here is what I have tried so far (it isn’t working, and it doesn’t answer my question above):

wp-config.php

define( 'WP_DEBUG', false );
define('WP_ALLOW_MULTISITE', true);
define('MULTISITE', true);
define('SUBDOMAIN_INSTALL', false);
define('DOMAIN_CURRENT_SITE', 'localhost');
define('PATH_CURRENT_SITE', '/wordpress2/');
define('SITE_ID_CURRENT_SITE', 1);
define('BLOG_ID_CURRENT_SITE', 1);
define ( 'BP_ROOT_BLOG', 2 ); // "2" is the team subsite $id

bp-custom.php

define ( 'BP_ENABLE_MULTIBLOG', true );
add_filter( 'bp_is_multiblog_mode', '__return_true' );

define ( 'BP_ROOT_BLOG', 3 );
add_filter( 'bp_get_root_blog_id', 'filter_root_blog_id' ); // return blog ID in

I also installed the WordPress Multisite User Sync/Unsync plugin, but I’m not sure whether it solves my problem.

Go to Source
Author: Gaurav

Ubuntu transmits TTL 0 multicast packets

IP packets with TTL 0 should never leave the host.

But when I start an application that multicasts UDP packets with TTL 0, I see packets with TTL 0 leaving the host for a few seconds before the expected TTL 0 behavior kicks in. This mostly happens after a reboot, on the first start of the application.

I confirmed with tcpdump that packets with TTL 0 are leaving the host:

05:31:39.048304 IP (tos 0x0, id 14487, offset 0, flags [DF], proto UDP (17), length 1344)
    192.168.1.200.46968 > 239.0.0.1.1234: UDP, length 1316
05:31:39.049594 IP (tos 0x0, id 14488, offset 0, flags [DF], proto UDP (17), length 1344)
    192.168.1.200.46968 > 239.0.0.1.1234: UDP, length 1316
05:31:39.051601 IP (tos 0x0, id 14489, offset 0, flags [DF], proto UDP (17), length 1344)
    192.168.1.200.46968 > 239.0.0.1.1234: UDP, length 1316
05:31:39.053584 IP (tos 0x0, id 14490, offset 0, flags [DF], proto UDP (17), length 1344)
    192.168.1.200.46968 > 239.0.0.1.1234: UDP, length 1316

As we can see, the TTL is not displayed, which means TTL 0, as confirmed by the tcpdump man page: https://www.tcpdump.org/manpages/tcpdump.1.html (search for “ttl”; it clearly states that ttl is the time-to-live and is not reported if it is zero).

There are no iptables rules in place.

uname -a: Linux mydevice 4.15.0-101-generic #102-Ubuntu SMP Mon May 11 10:07:26 UTC 2020 x86_64 x86_64 x86_64 GNU/Linux

lsb_release -a:

No LSB modules are available.
Distributor ID: Ubuntu
Description:    Ubuntu 18.04.4 LTS
Release:        18.04
Codename:       bionic

What could be causing this behavior, and how can I resolve it?

Go to Source
Author: nullptr

Why is watch time different on the YouTube Analytics and Monetization pages?

Recently my YouTube channel crossed 4,000 hours of watch time over the last 365 days. I have 1,301 subscribers and 4.4k hours of watch time. Please see the attached image.

[screenshot: YouTube Analytics page]

So, as per YouTube’s rules, I can apply for monetization now. But on the Monetization page, the subscriber count is 1,290 and the watch time is just 1,661 hours. Why?

[screenshot: YouTube Monetization page]

I waited two days, thinking it might take some time for their databases to update, but nothing has changed. Why is there such a difference between the Analytics and Monetization pages?

Go to Source
Author: LFS

Failure to build with docker-compose: Invalid signal: SIGQUIT

PS D:\> docker-compose up
Building app
Step 1/7 : FROM php:7.4-fpm-alpine
 ---> 5ae58b49885c
Step 2/7 : RUN apk add --no-cache autoconf curl dpkg-dev dpkg freetype-dev file g++ gcc git icu-dev jpeg-dev libc-dev libmcrypt-dev libpng-dev libjpeg-turbo-dev libxml2-dev libzip-dev make mariadb-dev postgresql-dev pkgconf php7-dev re2c rsync unzip wget zlib-dev
ERROR: Service 'app' failed to build: Invalid signal: SIGQUIT

Version: 19.03.8
OS/Arch: windows/amd64
Experimental: true

Switching containers doesn’t help. What can I do, or how can I debug this?

Thank you.

Go to Source
Author: dick-justice

Frontend/Backend Integration: Wait for a Google Cloud Function to run and then return the result

I am trying to create a tool that receives user input on the front end, uses that input to run a Google Cloud Function, and then returns a link to the output, which is also hosted on Google Cloud.

How can I show the user that the process is still running, and update the page when it is finished?
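
For context, the rough shape I have in mind on the frontend is below; the endpoint URLs, field names, and helper functions are placeholders rather than a real API, and I haven’t built this yet. The idea is to kick off the function, then poll a status endpoint until the output link is ready.

// Rough sketch only: start the Cloud Function, then poll a (hypothetical)
// status endpoint and update the page when the output link is available.

// placeholder helpers that would update the page
const setStatus = (msg: string) => {
  document.getElementById("status")!.textContent = msg;
};
const showLink = (url: string) => {
  document.getElementById("result")!.innerHTML = `<a href="${url}">View output</a>`;
};

async function runJob(userInput: string): Promise<void> {
  setStatus("Starting...");
  const start = await fetch("https://REGION-PROJECT.cloudfunctions.net/startJob", {
    method: "POST",
    headers: { "Content-Type": "application/json" },
    body: JSON.stringify({ input: userInput }),
  });
  const { jobId } = await start.json();

  // poll every few seconds until the function reports it is done
  while (true) {
    const res = await fetch(
      `https://REGION-PROJECT.cloudfunctions.net/jobStatus?id=${jobId}`
    );
    const status = await res.json();
    if (status.done) {
      showLink(status.outputUrl); // link to the result hosted on Google Cloud
      return;
    }
    setStatus("Still processing...");
    await new Promise((resolve) => setTimeout(resolve, 3000));
  }
}

If polling like this is the wrong approach entirely, I’m open to other patterns as well.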

Go to Source
Author: Lance Johnson