Facebook does not see Open Graph tags in Angular app

I have an SPA built with Angular. I set the Open Graph tags in JS via the Angular Meta service (https://angular.io/api/platform-browser/Meta) once the REST API request for the data completes.
I do not use SSR.

The Facebook debugger – https://developers.facebook.com/tools/debug – does not see the tags, even though they are present in the head.
I am not allowed to share website urls.

Maybe the reason is that the Facebook crawler is not able to execute JS. I have not found that explicitly stated in the Facebook docs. If that is the reason, how can I overcome it?
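For context, Facebook's crawler has historically not executed JavaScript, so tags injected client-side tend to be invisible to it; the usual workarounds are SSR, prerendering, or serving crawler-specific static HTML from the web server. A minimal sketch of that last idea follows – every name in it (`CRAWLER_UAS`, `isCrawler`, `buildOgHtml`) is hypothetical, not from any library:

```typescript
// Hypothetical helpers for serving prerendered OG tags to crawlers only:
// the web server would call isCrawler() on the User-Agent header and,
// for crawlers, respond with static HTML built by buildOgHtml().
const CRAWLER_UAS = ["facebookexternalhit", "facebot", "twitterbot"];

function isCrawler(userAgent: string): boolean {
  const ua = userAgent.toLowerCase();
  return CRAWLER_UAS.some((bot) => ua.includes(bot));
}

function buildOgHtml(tags: Record<string, string>): string {
  const metas = Object.entries(tags)
    .map(([prop, content]) => `<meta property="${prop}" content="${content}">`)
    .join("");
  return `<!doctype html><html><head>${metas}</head><body></body></html>`;
}
```

Regular browsers would still receive the JS bundle unchanged; only requests whose User-Agent matches a known crawler would get the static page.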

Go to Source
Author: Bohdan Zhmud

Connecting to an OpenVPN Server

I’m still very new to understanding all this VPN/server stuff, so please bear that in mind when helping.

I am trying to connect to an OpenVPN server set up by my client.
How do I connect as a client to this server via the Ubuntu terminal?
All they have provided me with is the key and the WAN and LAN addresses of their network.
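For reference, the usual client flow on Ubuntu is to install the openvpn package (`sudo apt install openvpn`) and point it at a client configuration file; a server operator normally hands out a ready-made `.ovpn`/`.conf` file. The sketch below is a placeholder only – every value in it (addresses, file names) is invented and would have to come from the client's actual setup:

```
# /etc/openvpn/client/example.conf -- placeholder values throughout
client
dev tun
proto udp
remote <server-WAN-address> 1194   # the WAN address you were given
resolv-retry infinite
nobind
persist-key
persist-tun
ca ca.crt                          # certificate/key files from the server operator
cert client.crt
key client.key
verb 3
```

It would then be started with `sudo openvpn --config /etc/openvpn/client/example.conf` (or `sudo systemctl start openvpn-client@example` on systemd-based Ubuntu). Note that if all you truly have is a single key and two addresses, you will likely still need the CA certificate and a client certificate (or a static-key config) from whoever runs the server.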

Thank you.

Go to Source
Author: Louise Finch

Why does a Rundeck job fail entirely when the first node fails?

I’ve created a job in Rundeck that is meant to run on all the nodes in a datacenter (>300). The job is expected to fail on some of them, so I set the “If a node fails:” option to “Continue running on any remaining nodes before failing the step”. I’ve also set the thread count to 10 to slow things down a bit, in case we need to abort.

However, when I run the job, it runs for a while as expected (10 nodes in parallel, with a new node starting whenever one fails or completes), but as soon as the first node to run fails, the entire job fails and stops.

What am I missing?

Go to Source
Author: Tom Klino

Scaling out MySQL & Redundancy-Speed tradeoff?

I’m building an e-commerce service for a group of sellers. They have a common HQ that manufactures their product.

Tables:

  1. order (id, seller_id, timestamp)
  2. order_products (order_id, product_id, seller_id, timestamp, pincode)
  3. transaction (id, seller_id, timestamp)
  4. transaction_products (transaction_id, product_id, seller_id, timestamp, pincode)
  5. seller (id, pincode, name)
  6. product(id, price)

Specifications:

  1. There are 100 sellers
  2. Each seller performs 500 transactions per day
  3. Each transaction has 4 products associated with it
  4. Each seller places two orders per day to HQ
  5. Each order has 50 products

HQ Requirements:

  1. How many products were sold by which seller in a given month
  2. How many products were sold in a given pincode in a given month
  3. Orders placed by all sellers in a given month

Seller Requirements:

  1. View cost of order placed by him/her (the seller)
  2. View his/her sales of a given month

The product is ready and the application works just fine. But I’m concerned about two things.

  1. Scaling: Being really new to this, I don’t know much about scaling out, sharding, or clustering. How long can I put these concerns aside?
  2. Redundancy: As you can see in transaction_products and order_products, I’ve duplicated columns from transaction and order, respectively. The redundant columns are timestamp, seller_id, and pincode. My idea was to avoid joins, but I’m not sure whether joins would be more expensive than the current redundancy. Can anyone point me in the right direction?
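On the redundancy question, the trade-off can be made concrete: the duplicated seller_id/timestamp/pincode columns let per-product queries skip the join to the parent transaction/order row. A small sketch (sqlite3 as a stand-in for MySQL, with a toy dataset invented for illustration) showing that both query shapes return the same answer:

```python
# Sketch: join vs. denormalized read, on a miniature version of the schema.
import sqlite3

db = sqlite3.connect(":memory:")
db.executescript("""
CREATE TABLE seller (id INTEGER PRIMARY KEY, pincode TEXT, name TEXT);
CREATE TABLE "transaction" (id INTEGER PRIMARY KEY, seller_id INTEGER, timestamp TEXT);
CREATE TABLE transaction_products (
    transaction_id INTEGER, product_id INTEGER,
    seller_id INTEGER, timestamp TEXT, pincode TEXT  -- redundant copies
);
INSERT INTO seller VALUES (1, '560001', 'Alice');
INSERT INTO "transaction" VALUES (10, 1, '2020-07-01');
INSERT INTO transaction_products VALUES (10, 100, 1, '2020-07-01', '560001');
""")

# Normalized shape: recover seller_id/timestamp through a join on the parent row.
joined = db.execute("""
    SELECT t.seller_id, COUNT(*) FROM transaction_products tp
    JOIN "transaction" t ON t.id = tp.transaction_id
    WHERE t.timestamp LIKE '2020-07%' GROUP BY t.seller_id
""").fetchall()

# Denormalized shape: read the copied columns directly, no join.
direct = db.execute("""
    SELECT seller_id, COUNT(*) FROM transaction_products
    WHERE timestamp LIKE '2020-07%' GROUP BY seller_id
""").fetchall()

assert joined == direct == [(1, 1)]
```

With an index on transaction_products(transaction_id), a join like this stays cheap at the volumes described above; the common advice is to stay normalized until a measured query is actually too slow.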

Go to Source
Author: Koushik Shom Choudhury

Load single module based on configuration using dependency injection

I’m working on an application that will run on multiple systems and may use different modules to communicate with external systems, but on each system, only one module will be used at a time. As it should be possible to change the used module on a specific system, the same application with all modules should be deployed to all systems. For simplicity, let’s assume that there are two modules called Foo and Bar.

Both modules have their own module descriptor that registers the module components to the dependency injection container:

public class FooModule : IModule
{
    public void Configure(IServiceCollection services)
    {
        services.AddTransient<IService, FooService>();
        // Register dependencies of FooService
    }
}

I know that Autofac supports modules out of the box (even with support for configuration) and there are several libraries that add such a feature to Microsoft.Extensions.DependencyInjection, but I want to ask this question with a general look at the concept of dependency injection.

If the services of all modules were meant to be used at the same time, I would be done: given that they implement the same service interface, I could inject them using IEnumerable<IService>. But in my use case, there is a component that requires a single IService, and I want to select the implementation based on configuration (e.g. from a file).

Now there are several approaches where to apply that configuration and I’m not sure which one should be preferred:

1st approach – load single assembly

I could read the configuration and then load only the external assembly that contains the IModule that should be used. This would require the introduction of some “magic” link between the configuration values and the names of the module assemblies, as the core application should not know the extension modules beforehand.

2nd approach – call single module

All the assemblies are loaded, but using a link between the configuration values and the names of the module classes (or namespaces), only the one module that should be used is called to set up the passed IServiceCollection.
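A sketch of this approach, using the question's IModule interface (the configuration key "Module" and the type-name convention are invented for illustration, not from any library):

```csharp
// Sketch only: pick the single module whose type name matches the
// configured value, e.g. "Foo" selects FooModule.
// Requires System.Linq plus the DI/configuration abstractions.
public static class ModuleLoader
{
    public static void ConfigureActiveModule(
        IServiceCollection services,
        IConfiguration configuration,
        IEnumerable<IModule> modules)
    {
        var active = configuration["Module"];  // hypothetical config key

        var module = modules.Single(m =>
            m.GetType().Name == active + "Module");

        module.Configure(services);
    }
}
```

The “magic” link is then the type-name convention; Single throws if the configuration matches zero or several modules, so a misconfiguration fails fast at startup.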

3rd approach – let the module decide

The modules decide on their own whether they are configured and therefore should provide their service implementation. Instead of evaluating the configuration in the core application, the configuration gets passed to the modules:

public class FooModule : IModule
{
    public void Configure(IServiceCollection services, IConfiguration configuration)
    {
        if (configuration.GetSection("foo").Exists())
        {
            services.AddTransient<IService, FooService>();
        }
    }
}

4th approach – use some DI container feature

I know that Autofac and other DI containers support named/keyed service registrations, which would basically solve this problem and therefore answer this question for me. However, I guess there is a reason why other DI containers like Microsoft.Extensions.DependencyInjection do not provide this feature.


What approach would you prefer? Is there an approach or some variation that I missed? Is there anything else I should keep in mind when designing a system in that way?

Go to Source
Author: Lukas Körfer

Show products per page drop down

So I have a WordPress WooCommerce site and I would like to add a control under the page numbers (is that called pagination?) so the user can select how many products to view per page, i.e. 20, 40, 80, etc.

I would like it to be at the bottom, under the page numbers. I have tried a solution, but it leaves the drop-down at the top, near where it says “Showing products 1-20…” – not quite where I want it.

Any ideas, or can you point me to a thread or tutorial on this?

Thanks for reading!

Mike

Example I like: https://www.funko.com/shop (see the bottom of the page).

Go to Source
Author: Mikebb

Execute expect script and then detach from the child process

I am using an expect script to switch to the sudo user and then run some custom commands. I am using zsh + the Tilix terminal.

#!/usr/bin/expect

set timeout 60
spawn sudo su
expect ": "
sleep .5
send "password\r"
expect "\u0007"
sleep .5
send "cd /path/to/a/folder && source some_script\r"
interact

As I understand it, everything I type after the interact instruction runs in a child process of the expect process. This causes a weird autocompletion problem in my zsh configuration when copy-pasting a long command: history navigation with the up or down arrow keys causes commands to overlap. E.g., when I type the command

echo "Wikipedia is a multilingual online encyclopedia created and maintained as an open collaboration project by a community of volunteer editors using a wiki-based editing system." 

and press the up-arrow key, the resulting command looks garbled, with the two commands overlapping. [screenshot omitted]

The above problem does not occur when I manually enter the expect script commands.

So, is there any way to have the expect script switch the user and then detach from the expect process completely? Or any ideas on resolving the above problem?

Go to Source
Author: Mohit Kumar Jangid

Hide “What’s Happening” and “Who To Follow” on Twitter

Because the divs have random hashed classes, I can’t figure out how to hide them or what to hide them with. These sections take up an insane amount of space and the content is terrible; I would love to hide them.

I tried div[aria-label="Timeline: Trending now"] {display:none;} with Stylish, but either the selector or the Chrome extension didn’t work.

Go to Source
Author: stephd

Jenkins – docker login doesn’t seem to persist: docker pull won’t work but docker-compose can pull without problems

I am setting up a new Jenkins as our old one is based on a deprecated base image.

Dockerfile is:

FROM jenkins/jenkins:lts-centos7

USER root

RUN yum -y install docker \
    && yum clean all

USER jenkins

When I docker exec ... into the container and log into our Sonatype Nexus to pull Docker images from it, the successful login does not seem to be heeded by Jenkins afterwards:

[root@909fb3a3d52c .docker]# docker login https://our.nexus.internal:<endpoint>
Username (jenkins): jenkins
Password:
Login Succeeded
[root@909fb3a3d52c .docker]# pwd
/root/.docker
[root@909fb3a3d52c .docker]# cat config.json
{
    "auths": {
        "https://our.nexus.internal:<endpoint>": {
            "auth": "<CORRECT base64 hash>"
        }
    }
}

Login has succeeded, the auth credentials are written to /root/.docker/config.json, and when I base64-decode them they are correct (“jenkins:”).

However:

[root@909fb3a3d52c .docker]# docker pull https://our.nexus.internal:<endpoint>/myImage:myTag
Error response from daemon: Get https://our.nexus.internal:<endpoint>/v2/myImage/manifests/myTag: no basic auth credentials

This will also happen during execution of a Jenkins pipeline (obviously), where job log reports:

docker pull https://our.nexus.internal:<endpoint>/myImage:myTag

Error response from daemon: Get https://our.nexus.internal:<endpoint>/v2/myImage/manifests/myTag: no basic auth credentials

script returned exit code 1

I assume this is not a question about Nexus but about how the credentials from docker login are stored and used.
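One thing worth checking, purely as a hypothesis: in the config.json above, the auth entry is stored under the literal string passed to docker login, scheme included (https://our.nexus.internal:<endpoint>), while docker pull resolves credentials by the registry part of the image reference, which carries no scheme. If the stored key and the lookup key differ, the symptom is exactly “no basic auth credentials”. A toy illustration of that key mismatch (the hostname and port are placeholders, and this is not real docker code):

```shell
# Illustration only: docker stores auths under the string given at login
# time; pulls look up the "host:port" portion of the image reference.
stored_key="https://our.nexus.internal:8443"     # as written by `docker login https://...`
image="our.nexus.internal:8443/myImage:myTag"
lookup_key="${image%%/*}"                        # registry portion, no scheme

echo "stored: $stored_key"
echo "lookup: $lookup_key"
if [ "$stored_key" = "$lookup_key" ]; then
  echo "creds found"
else
  echo "no basic auth credentials"
fi
```

If that is the cause, logging in with the bare host:port (no https:// prefix) so that the stored key matches the lookup key would be the fix; docker-compose may normalize the key differently, which would explain why it can pull.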

Sidenote: we have a different Jenkins currently operational, based on jenkinsci/blueocean:1.22.0, which does NOT display this behaviour. It can docker login and docker pull just fine.

edit1:

Running docker-compose does work, while a manual docker pull will not:

[root@3eeee032ff08 /]# docker login https://nexus:port
Username: jenkins
Password:
Login Succeeded

[root@3eeee032ff08 /]# docker pull nexus:port/company/image:myTag
Error response from daemon: Get https://nexus:port/v2/company/image/manifests/myTag: no basic auth credentials

[root@3eeee032ff08 /]# cat docker-compose.yml
version: '3.3'

services:
  jenkins:
    user: root
    image: nexus:port/company/image:myTag

[root@3eeee032ff08 /]# docker-compose up

Creating network "default_default" with the default driver
Pulling jenkins (nexus:port/company/image:myTag)...
myTag: Pulling from company/image
f34b00c7da20: Pull complete
3f316190de62: Pull complete
480967924aca: Pull complete
619ac94672e0: Pull complete
cb1c6713a236: Pull complete
55766a17b1c6: Pull complete
8f6e881b2ef2: Pull complete
f118a721eee8: Pull complete
fae3c8e02b83: Pull complete
093ceb59408d: Pull complete
e3bd3cbf474d: Pull complete
41a6f4e5de6d: Pull complete
aa1aece4c754: Pull complete
fc897d8ca22f: Pull complete
Digest: sha256:3fd74169f648784fe960add9c0138e3802a91e82c9b5fdbd734faaeaaef696f9
Status: Downloaded newer image for nexus:port/company/image:myTag

Go to Source
Author: Worp

Why is Ubuntu 20.04 not registering ethernet on my Raspberry Pi?

When the Raspberry Pi boots up, Ubuntu 20.04 says the following:
Net: No ethernet found.
starting USB…
No working controllers found

It continues to boot up, but this means that I cannot install a desktop environment on it. The ethernet port and cable work fine, and the Raspberry Pi’s ethernet jack lights up when the cable is plugged in. How do I get the ethernet to be recognized?
Thanks!

Go to Source
Author: techyguy2

Julia implementation of Louvain algorithm

I am trying to implement the Louvain Algorithm in Julia.

The paper describes the modularity gain as:

ΔQ = [ (Sum_in + 2*K_i_in) / (2m) − ((Sum_tot + K_i) / (2m))² ] − [ Sum_in / (2m) − (Sum_tot / (2m))² − (K_i / (2m))² ]

where Sum_in is the sum of the weights of the links inside C, Sum_tot is the sum of the weights of the links incident to nodes in C, K_i is the sum of the weights of the links incident to node i, K_i_in is the sum of the weights of the links from i to nodes in C, and m is the sum of the weights of all the links in the network.

My implementation is:

function linksIn(graph, communities, c)::Float32
    reduce(+,
        map(
            e-> (communities[e.src] == c && communities[e.dst] == c)
                ? e.weight
                : 0
            , edges(graph)
        )
    )
end

function linksTot(graph, communities, c)::Float32
    reduce(+,
        map(
            e-> (communities[e.src] == c || communities[e.dst] == c)
                ? e.weight
                : 0
            , edges(graph)
        )
    )
end

function weightsIncident(graph, node)::Float32
    reduce(+,
        map(
            n-> get_weight(graph, node, n)
            , neighbors(graph, node)
        )
    )
end

function weightsIncidentComunity(graph,communities, node, c)::Float32
    reduce(+,
        map(
            n-> (c == communities[n])
                ? get_weight(graph, node, n)
                : 0
            , neighbors(graph, node)
        )
    )
end

function modulGain(graph, communities, node, c)::Float32

    # Calculate the variables of the modularity gain equation
    wIn = linksIn(graph, communities, c);
    wTot = linksTot(graph, communities, c);
    k = weightsIncident(graph, node);
    k_com = weightsIncidentComunity(graph, communities, node, c);
    m = reduce(+, map(e->e.weight, edges(graph)));


    # return the result of the modularity gain equation

    return ((wIn +k_com) / (2*m) - ((wTot+k)/(2m))^2 )
        - ((wIn/(2m)) - (wTot/(2m))^2 - (k/(2m))^2 )
end

If I compare the results of the function modulGain with the modularity difference, I get the following examples for the first pass (where each node is in its own community) in this graph.

  • modulGain(graph, communities, 1, 1) -> 0.00010885417
  • modulDifference(graph, communities, 1, 1) -> 0.0

and

  • modulGain(graph, communities, 1, 3) -> 4.806646e-5
  • modulDifference(graph, communities, 1, 3) -> 5.51432459e-5

When running the algorithm with the modularity gain equation, it tends to get stuck in an infinite loop.
I want to avoid using the modularity difference, since the modularity gain equation gives a clear performance improvement.

Can someone explain to me what is wrong with my implementation?
Thank you.

Go to Source
Author: D. Saby

CentOS 8: Network Card disconnected on boot

Having an odd issue: I just installed RockStore on a computer (it’s CentOS-based) and it is all running fine, but when the system boots the NIC is disabled.

I run nmcli device status and I can see the device is “disconnected”

If I then run nmcli con up DEVICEID, the ethernet device connects fine, but if I reboot the system, the NIC is disabled again…

How can I force the card to connect automatically on boot?

Many thanks,

Mo

Go to Source
Author: PsychoMo

Can I store certificates in the personal store of a virtual service account?

We’d like to make use of virtual accounts (https://docs.microsoft.com/en-us/previous-versions/windows/it-pro/windows-server-2008-R2-and-2008/dd548356(v=ws.10)?redirectedfrom=MSDN#using-virtual-accounts) to run some of our applications on our own servers.

It seems easy enough to do, however: some of our applications need access to certain certificates in order to communicate with other (remote) services.

I’ve tried installing certificates into the user store of a virtual account, and I’ve also tried granting access to the private key of a certificate in the local machine store, both of which seem to have failed.

Is what I’m trying to do impossible?

Go to Source
Author: Richiban

Decoupling and Command Pattern

I am working on a project (a multiplayer game) to learn how to make bigger and better software, and I’ve run into a problem involving the Command design pattern and the God Object anti-pattern. I often end up with the latter, and while I know that facades are okay, my understanding of what is right and wrong in OOP is very blurry.

I’ve implemented the Command Pattern. Shortly, my command:

public interface IGameCommand : ICommand
{
    bool Execute(Game game);
}

And an executor

public interface IExecutor<TState, TCommand>
{
    void Execute(TCommand command);
}

Let’s say that I have a command that does a lot: modifies some data, plays sound etc.

So, in my case, this should look like this:

public class MagicSpell: IGameCommand
{
    int x; int y; int damage; string soundClipName; string effectName;
    public bool Execute(Game game)
    {
         game.gameState.map[x][y].unit.TakeDamage(damage);
         ...
         game.soundPlayer.PlaySound(soundClipName);
         ...
         game.specialEffectPlayer.PlayEffect(effectName);
         ...
    }
}

As you can see, this forces my Game class to become a God Object… or does it? Since my Game class contains specialized classes that each do their own thing, I am fulfilling the single responsibility principle. I have had bad experiences with ending up with a God Object, but is my solution viable and acceptable OOP?

Or maybe is something wrong with my Command Pattern implementation and Execute(Game game)?

Maybe making some specialized commands would help?

Go to Source
Author: Clockworker