How to add PPA source for this gcc-10 package?

I know that to add a PPA, we use:

add-apt-repository ppa:user/ppa-name

But for this package, gcc-10, what are the user and ppa-name?

I have tried:

add-apt-repository ppa:ubuntu/gcc-10
add-apt-repository ppa:gcc-10
add-apt-repository ppa:focal/gcc-10

They all fail with:

user or team does not exist.

Go to Source
Author: kilasuelika

Why do I have a directory called .ecryptfs full of files when I do not have my file system encrypted?

(Revisiting the eCryptfs headers errors post from 2015)

In my /var/log/syslog file, I was seeing the error

Valid eCryptfs headers not found in file header region or xattr region, inode 7080159

I used @GiovanniToraldo’s suggestion

find / -inum 7080159

and found the errant file in the /home/.ecryptfs/<user>/.Private/... directory. The file size was 8192 bytes and the file date was July 10, 2019 (last year), so I deleted the file.
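
For reference, here is a rough Python equivalent of that inode lookup (a minimal sketch; it assumes the scan can be restricted to /home/.ecryptfs, and the inode number is the one from the syslog message):

import os

TARGET_INODE = 7080159           # inode reported in /var/log/syslog
SEARCH_ROOT = "/home/.ecryptfs"  # restrict the scan; "/" also works but is slow

# Walk the tree and compare each entry's inode number against the target.
# Note that inode numbers are only unique within a single filesystem.
for dirpath, dirnames, filenames in os.walk(SEARCH_ROOT):
    for name in filenames:
        path = os.path.join(dirpath, name)
        try:
            if os.lstat(path).st_ino == TARGET_INODE:
                print("Found:", path)
        except OSError:
            pass  # skip entries we cannot stat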

Why do I have a directory for .ecryptfs when I do not have my file system encrypted? Can I just delete the entire .ecryptfs directory?

Note: I am running Ubuntu 14.04 LTS on a desktop computer with a RS880 [Radeon HD 4250] video card that is no longer supported by the Linux kernel beyond 14.04 LTS, so I cannot upgrade Ubuntu without buying a new computer.

Go to Source
Author: Stephen Daddona

Can you edit /etc/passwd on a firmware image?

I'm trying to gain root access on a cheap DVR that I bought for a CCTV system. The manufacturer has a firmware update image available for download, which I was able to inspect using a tool called binwalk, and I extracted the rootfs. It looks like a flavor of embedded Linux with BusyBox.

I took a look at the /etc/passwd file, and it has one line for root with a password hash and a login shell defined. There's no /etc/shadow file, but there is a /etc/passwd- file, which I haven't seen before.

I also found an init script that would normally launch telnetd, but it's commented out.

Can I just generate a new password hash to substitute in /etc/passwd, uncomment the telnetd line and then log in as root with the new password through telnet over the network?

(Of course, this is ignoring the possible problems I could run into with flashing the new firmware onto the device and bricking it.)
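
For the hash itself, here is a minimal sketch (assuming BusyBox on the device accepts MD5-style $1$ crypt hashes; the password below is just a placeholder) of generating a replacement value for the second field of the root line in /etc/passwd:

import crypt  # standard library, Unix-only (deprecated in recent Python releases)

# Produce an MD5-style crypt hash ("$1$salt$hash"); swap in your own password.
new_hash = crypt.crypt("my-new-root-password", crypt.mksalt(crypt.METHOD_MD5))
print(new_hash)

# The output replaces the second field of the root entry, e.g.:
# root:$1$abcdefgh$................:0:0:root:/root:/bin/sh

(As an aside, /etc/passwd- is usually just a backup copy of /etc/passwd left behind by the tools that edit it.)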

Go to Source
Author: jvergeldedios

Trying to run a Python program from the command line: ImportError: No module named mysql.connector

I'm new to programming, so I have a question about this error.
I'm trying to run a program from the command line in PyCharm, and I get this error.

root@kali:~# python /root/PycharmProjects/lesson/ProL3/ner.py
Traceback (most recent call last):
  File "/root/PycharmProjects/lesson/ProL3/ner.py", line 2, in <module>
    import mysql.connector as sql
ImportError: No module named mysql.connector

Program:

import mysql.connector as sql
from mysql.connector import Error
import argparse


connection = sql.connect(
    host="localhost",
    password="mroot",
    user="myro"
)
mycursor = connection.cursor()

parser = argparse.ArgumentParser(description='Database management', usage='Script option:')
parser.add_argument('-d', '--data', help="Create Databases")
args = parser.parse_args()

class Data:
    def __init__(self, database=''):
        self.database = database
        self.create_database()

    def create_database(self):
        try:
            mycursor.execute(self.database)
            print("The database was created!")
        except Error as e:
            print(e)


What should I do?
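
For what it's worth, the traceback indicates that the interpreter being used cannot find mysql.connector (the PyPI distribution is usually named mysql-connector-python). A minimal sketch of guarding the import so the failure explains itself:

# Guard the import so a missing driver produces an actionable message
# instead of a bare ImportError.
try:
    import mysql.connector as sql
except ImportError:
    raise SystemExit(
        "mysql.connector is not installed for this interpreter; "
        "install it with: python -m pip install mysql-connector-python"
    )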

Go to Source
Author: Boboc

How can I ensure the client and server both have access to all API calls in use?

On multiple occasions, we've deployed frontend code to production only to find out that the backend (REST or GraphQL) hasn't shipped its side yet. Worse yet, we unexpectedly find out that a param name changed, which may throw an error. Another example: the backend removes an API, thinking that clients no longer use it, and the frontend crashes. If any layer of communication between frontend and backend breaks down, we may end up with catastrophic errors.

I think the “best solution” is to use a tool like Cypress or Codecept to create a suite of integration tests which checks every API call the frontend may use. Sadly, that’s a heavyweight solution that requires significant upfront investment in developer time.

Anyway, I'm looking for a simple solution to this problem. Maybe something that checks the affected APIs when the frontend opens a PR, and/or something that checks the frontend repo when the backend deploys to production.

Any ideas or experience solving this problem?
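
As a rough illustration of one lightweight approach, here is a stdlib-only Python sketch (the base URL and endpoint names are hypothetical) of a smoke test the frontend's CI could run to verify that every backend route it depends on still answers:

import sys
import urllib.error
import urllib.request

# Hypothetical list of backend routes the frontend currently calls;
# in practice this could be generated from the frontend's API client layer.
BASE_URL = "https://staging.example.com"
REQUIRED_ENDPOINTS = ["/api/users", "/api/orders", "/graphql"]

failures = []
for path in REQUIRED_ENDPOINTS:
    try:
        with urllib.request.urlopen(BASE_URL + path, timeout=10) as resp:
            status = resp.status
    except urllib.error.HTTPError as err:
        status = err.code
    except urllib.error.URLError as err:
        failures.append(f"{path}: unreachable ({err.reason})")
        continue
    # 404/410 suggest the route was removed; auth errors (401/403) at least
    # prove the route still exists.
    if status in (404, 410):
        failures.append(f"{path}: returned {status}")

if failures:
    print("Contract check failed:")
    print("\n".join(failures))
    sys.exit(1)
print("All required endpoints are still present.")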

Go to Source
Author: Brandon

Synchronize time among several concurrent users

I'm using Google Sheets to come up with a quick Jeopardy-style game with an improvised buzzer. All players type the timestamp shortcut (Ctrl+Shift+0) into a cell, and the lowest value is presumed to have buzzed in first, but we've realized that some people are a few seconds, even minutes, ahead (or behind). We've diagnosed that it's related to the time set in each person's OS.

Is there a way to sync times through the Sheets app? Or at the very least determine who input first?

I've tried to remedy this with a series of nested array formulas using IF and COUNT statements, but to no avail. Any help would be appreciated. Otherwise I'm going to set my OS's time to 1899 and smoke everyone.

Go to Source
Author: Rudy A

What is an example of an out-of-bounds read used to leak sensitive information?

I am trying to understand a little better what happens behind the scenes when bypassing ASLR by reading bytes from a process's memory. How can I build an example of an info leak on Win32? My code leaks some bytes, but how can I work out the image base from those bytes?

#include <stdio.h>
#include <string.h>

int main(int argc, char **argv) {
    char a[16];

    /* strncpy fills all 16 bytes of a with no terminating NUL,
       so a is not a valid C string after this call. */
    strncpy(a, "0123456789abcdef", sizeof(a));

    //... lots of code passes, functions are called...
    //... we finally come back to array a ...

    /* printf keeps reading past the end of a until it happens to hit a zero
       byte: an out-of-bounds read that leaks adjacent stack memory. */
    printf("%s\n", a);
}

Go to Source
Author: pepe

Jinja syntax error: missing endif, in spite of providing it

I am using Salt 2019.2.0, and below is my state file, where I use an if condition to check pillar values and run states on a match. Even though I have provided endif, Jinja complains about a missing endif tag. Any help, please?

CODE:

{% if salt['pillar.get']('spade:Corename') == 'FI' and salt['pillar.get']('spade:NodeType') == 'master' %}
dataimport-script:
  file.managed:
    - name: /opt/bin/DataImport.py
    - source: salt://files/DataImport.py

dataimport-crontab:
  file.managed:
    - name: /etc/cron.d/solr-dataimport
    - contents: |
        # # set up data import for FI every 2 minutes
        */2 * * * * root /usr/bin/python /opt/bin/DataImport.py

{% elif salt['pillar.get']('spade:NodeType') in ['slave','ds'] and salt['pillar.get']('spade:Corename') == 'FI' %}
update-fi-solrconfig:
  file.replace:
    - name: {{ salt['pillar.get']('solr:home_dir') }}/data/{{ salt['pillar.get']('spade:Corename') }}/conf/solrconfig.xml
    - pattern: '"autoDeletePeriodSeconds">30'
    - repl: '"autoDeletePeriodSeconds">-1'
{% endif %}

ERROR:


local:
    Data failed to compile:
----------
    Rendering SLS 'base:solr.install-solr' failed: Jinja syntax error: Unexpected end of template. Jinja was looking for the following tags: 'endif'. The innermost block that needs to be closed is 'if'.; line 205

---
[...]
update-fi-solrconfig:
  file.replace:
    - name: {{ salt['pillar.get']('solr:home_dir') }}/data/{{ salt['pillar.get']('spade:Corename') }}/conf/solrconfig.xml
    - pattern: '"autoDeletePeriodSeconds">30'
    - repl: '"autoDeletePeriodSeconds">-1'
{% endif %} <======================

Go to Source
Author: user22405

Multisite User Sync Question about blog authorship

I used to have a single WordPress site with a BuddyPress section (logged-out visitors would see the blog only, but logged-in users would see the BuddyPress area). I realised this was ineffective, so I converted it to a multisite, where my "public" website is the root domain and my "intranet" lives at the /team subdirectory. When I visit the network site admin, I see 190 users. When I visit the public website admin (which was the original site), I see 190 users. When I visit the /team subsite admin, I only see one user: me, the superadmin. Something looks odd, but I know that at the back end WP has only one users table, so things should theoretically be fine.

  • The posts on my public website are written by people who register to be part of my intranet subsite.
  • The intranet should be the area where members can update their author bio, their profile image, etc. We also discuss new article drafts there.

My question is: should new members register on my main root site or on the /team subsite? That is, should the "register/join our team" link point to http://main.com/register or http://main.com/team/register? Does it make a difference? How do I configure the backend of my multisite so that there is only one blog, and all articles, drafts, etc. are shared across my main site and my team site (for example, I want each member's profile page on the team intranet to list all the posts they have written for the public website)?

Here is what I have tried so far (it's not working, and it doesn't answer my question above):

wp-config.php

define( 'WP_DEBUG', false );
define('WP_ALLOW_MULTISITE', true);
define('MULTISITE', true);
define('SUBDOMAIN_INSTALL', false);
define('DOMAIN_CURRENT_SITE', 'localhost');
define('PATH_CURRENT_SITE', '/wordpress2/');
define('SITE_ID_CURRENT_SITE', 1);
define('BLOG_ID_CURRENT_SITE', 1);
define ( 'BP_ROOT_BLOG', 2 ); // "2" is the team subsite $id

bp-custom.php

define ( 'BP_ENABLE_MULTIBLOG', true );
add_filter( 'bp_is_multiblog_mode', '__return_true' );

define ( 'BP_ROOT_BLOG', 3 );
add_filter( 'bp_get_root_blog_id', 'filter_root_blog_id' ); // return blog ID in

I also installed the WordPress Multisite User Sync/Unsync plugin, but I'm not sure whether it solves my problem.

Go to Source
Author: Gaurav

Ubuntu transmits TTL 0 multicast packets

IP packets with TTL 0 should never leave the host.

But when I start an application that multicasts UDP packets with TTL 0, I see packets with TTL 0 leaving the host for a few seconds before the normal TTL 0 behavior kicks in. This mostly happens after a reboot, on the first start of the application.
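
For context, here is a minimal sketch of how an application typically requests a multicast TTL of 0 (the group and port match the tcpdump output below; the plain IPv4 UDP socket is an assumption about the application):

import socket
import struct

GROUP, PORT = "239.0.0.1", 1234  # values seen in the tcpdump capture

sock = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
# IP_MULTICAST_TTL = 0 asks the kernel to keep the datagrams on the local
# host and not put them on the wire at all.
sock.setsockopt(socket.IPPROTO_IP, socket.IP_MULTICAST_TTL, struct.pack("b", 0))
sock.sendto(b"payload", (GROUP, PORT))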

I confirmed with tcpdump that packets with TTL 0 were leaving the host:

05:31:39.048304 IP (tos 0x0, id 14487, offset 0, flags [DF], proto UDP (17), length 1344)
    192.168.1.200.46968 > 239.0.0.1.1234: UDP, length 1316
05:31:39.049594 IP (tos 0x0, id 14488, offset 0, flags [DF], proto UDP (17), length 1344)
    192.168.1.200.46968 > 239.0.0.1.1234: UDP, length 1316
05:31:39.051601 IP (tos 0x0, id 14489, offset 0, flags [DF], proto UDP (17), length 1344)
    192.168.1.200.46968 > 239.0.0.1.1234: UDP, length 1316
05:31:39.053584 IP (tos 0x0, id 14490, offset 0, flags [DF], proto UDP (17), length 1344)
    192.168.1.200.46968 > 239.0.0.1.1234: UDP, length 1316

As we can see, the ttl is not displayed, which means TTL 0, as confirmed by the tcpdump man page (https://www.tcpdump.org/manpages/tcpdump.1.html; search for "ttl", which clearly states: "ttl is the time-to-live; it is not reported if it is zero").

There are no iptables rules in place.

uname -a: Linux mydevice 4.15.0-101-generic #102-Ubuntu SMP Mon May 11 10:07:26 UTC 2020 x86_64 x86_64 x86_64 GNU/Linux

lsb_release -a:

No LSB modules are available.
Distributor ID: Ubuntu
Description:    Ubuntu 18.04.4 LTS
Release:        18.04
Codename:       bionic

What could be the cause of this behavior, and how can I resolve it?

Go to Source
Author: nullptr

Why is watch time different on the YouTube Analytics and Monetization pages?

Recently my YouTube channel crossed 4,000 hours of watch time in the last 365 days. I have 1,301 subscribers and 4.4k hours of watch time. Please see the attached image.

[screenshot: YouTube Analytics showing 1,301 subscribers and 4.4k hours of watch time]

So, as per YouTube's rules, I can apply for monetization now. But on the Monetization page, the subscriber count is 1,290 and the watch time is just 1,661 hours. Why?

[screenshot: Monetization page showing 1,290 subscribers and 1,661 hours of watch time]

I waited two days, thinking it might take some time for their databases to update, but nothing has changed. Why is there such a difference between the Analytics and Monetization pages?

Go to Source
Author: LFS