WordPress Editor Won’t Load Content Anymore

I ran into some weird WordPress behaviour recently. While most pages load successfully, editing something – a post or a page, say – draws a blank browser tab: zero content whatsoever. The result is the same in both Firefox and Chrome; I haven’t tried other browsers.

ANSWER

Uncaught EvalError: Refused to evaluate a string as JavaScript because 'unsafe-eval' is not an allowed source of script in the following Content Security Policy directive: "default-src 'unsafe-inline'

This was the error that I got when I went into the page, opened developer tools, then reloaded the page.

This is caused by a custom Content-Security-Policy header set on the web server hosting the site. It looks something like this:

Header set Content-Security-Policy "default-src 'unsafe-inline'"

Adding 'unsafe-eval' to the policy works around it. Use of eval() is usually considered bad practice and insecure, which is why it is blocked until it is explicitly allowed in the policy. After the modification, the header looks like this; don’t forget to restart the web server.

Header set Content-Security-Policy "default-src 'unsafe-inline' 'unsafe-eval'"
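
After updating the directive, reload the web server so the new header is actually served. On a Debian/Ubuntu box running Apache (an assumption here; adjust for your distro), that is roughly:

sudo apachectl configtest          # check the configuration syntax first
sudo systemctl restart apache2     # restart so the modified header takes effect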

More detailed explanations can be had here: https://scotthelme.co.uk/content-security-policy-an-introduction/

gnuplot: Can you set shell variables in a script and then later call them?

I want to be able to issue shell commands from a gnuplot script, such as setting a variable, but it appears that the system command spawns a new shell instance instead of issuing commands to the shell instance that is running the gnuplot script. See the following script, where the first line allows assignment of $foo, but the second line cannot access that variable. In this case, $foo is assigned as an arbitrary string as opposed to a reference to the directory, hence the quoting in the first line.

#!/usr/bin/gnuplot -p
system "export foo="$HOME/path/to/dir";echo $foo"
system "echo $foo"
set datafile separator "\t"
#plot "`echo $foo`/bar.dat" using 2:3
plot "<( sed '5p' $foo/bar.dat )" using 2:3

Go to Source
Author: T. Zack Crawford

How to prevent WordPress from redirecting single page to prefixed category archive with same name?

I have been trying to solve this problem for hours. I would like to have a page with a simple slug (/blog) and a category term with the same name (/category/blog) exist on my site together at the same time. But the problem I am running into is that my page (/blog) keeps redirecting to the category archive page (/category/blog).

I have tried the following:

  • adjusting my permalinks by clearly specifying my category base
  • using remove_action('template_redirect', 'redirect_canonical'); to suppress redirects
  • using add_filter( 'template_redirect', 'blog_redirect_filter', 10, 3 ); to try and target my blog category specifically

I am running out of ideas. How can I stop my site from redirecting an existing page to a category archive when they each have different slugs?

EDIT

I figured it out! Classic wait-until-you-post-publicly-before-figuring-it-out-yourself problem. It was a fun mix of Yoast, custom permalinks, and category bases all conflicting with each other.

My answer:

  • Make sure your category base is specified in Permalinks. Save them.
  • Remove any custom permalink slugs in the category terms.
  • Go to Yoast > Search Appearance > Taxonomies and scroll to the very bottom. Make sure Category URLs is set to Keep.
  • Clear any cache and/or flush_rewrite_rules (see the sketch after this list).
  • Voila it’s fixed.
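
For the rewrite-flush step, a throwaway snippet like this can be used instead of re-saving permalinks (a sketch; drop it into functions.php temporarily and remove it once the rules have been rebuilt):

// Temporary: flush rewrite rules once on init, then delete this snippet.
add_action( 'init', function () {
    flush_rewrite_rules();
} );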

Go to Source
Author: Beth Mac

Is it possible to add a Slack call to a Google Calendar event?

When I invite a co-worker to meet with me, I often want to meet via a Slack call. It is easy to add a link to the calendar invite for a Zoom call, a Hangouts call, or even a Jitsi call (we use Google Calendar and there are integrations for all three). I am wondering if there is some way to achieve the same convenient “hop on the call in one click” when using Slack.

Go to Source
Author: lucas

Is adding encryption before hashing more secure?

One way to secure a password in the database is to hash it with a salt, i.e. append a random string to the password, hash the result, and store that value.
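
In code, that baseline scheme looks roughly like this (a sketch only; PHP and SHA-256 are my assumptions for illustration, and real systems would normally use a dedicated password hash such as bcrypt or Argon2):

$password = 'correct horse battery staple';       // example input
$salt     = bin2hex( random_bytes( 16 ) );        // random per-user salt
$digest   = hash( 'sha256', $password . $salt );  // hash(password || salt)
// store $digest and $salt alongside the user record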

Does encrypting the password || salt then hashing it make it more secure? Is it really necessary (or can I skip the encryption part)?

Go to Source
Author: xcoder

Query product categories that have a custom field

I’m trying to build a page template in WordPress that will show a list of categories that have a custom field checked.

The taxonomy is just the default WooCommerce taxonomy product_cat. I have added a custom field to the product_cat taxonomy, which is a checkbox called collectable.

My setup is:
Main Category

  • Child
  • Child
  • Child
  • Child

So, for example, two child categories have the collectable tick box checked, which is set to store the value 1 in the database.

So I am building a page that will show all categories with collectable checked.

$args = array(
    'post-type'  => 'product',
    'taxonomy'   => 'product_cat',
    'hide_empty' => 0,
);

$c      = get_categories( $args );
$c_keep = array();

foreach ( $c as $cat ) {
    if ( get_field( 'collectable', 'category_' . $cat->term_id ) ) {
        $c_keep[] = $cat;
    }
}

foreach ( $c_keep as $cat ) {
    echo $cat->name;
}

But I am getting nothing at all returned.

I even put a

print_r( $args );

But I am still coming up with a blank page (the header and footer load, as does the text above the query).

Can anyone help, please?
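
For comparison, the same lookup can be expressed as a term meta query, since ACF stores term fields as term meta (a sketch; the exact meta key used for the checkbox is an assumption):

$collectable_cats = get_terms( array(
    'taxonomy'   => 'product_cat',
    'hide_empty' => false,
    'meta_key'   => 'collectable',   // assumes ACF saved the checkbox under this key
    'meta_value' => '1',
) );

foreach ( $collectable_cats as $cat ) {
    echo esc_html( $cat->name );
}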

Go to Source
Author: user1348927

How can I login to MySQL as 127.0.0.1 or ::1 instead of localhost?

I’ve set different passwords for 'root'@'localhost' and 'root'@'127.0.0.1' like this:

mysql -u root
SET PASSWORD FOR 'root'@'localhost' = PASSWORD('pswd1');
SET PASSWORD FOR 'root'@'127.0.0.1' = PASSWORD('pswd2');

but when I try to log in like this:

mysql -u root -p -h 127.0.0.1

I still have to use the first password pswd1, even though, according to the screenshot I attached, another password pswd2 was set for 127.0.0.1.

What is the problem here? How can I set three different passwords for 'root'@'localhost', 'root'@'127.0.0.1' and 'root'@'::1', and log in to those accounts with those passwords?
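
One quick way to see which account a connection actually matched is to compare USER() (the user you asked for) with CURRENT_USER() (the account MySQL authenticated you as), for example:

-- run after connecting with: mysql -u root -p -h 127.0.0.1
SELECT USER(), CURRENT_USER();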

Go to Source
Author: JConstantine

What is the purpose of running PHP-FPM in its own container instead of in the same container as apache2/nginx?

I’m fairly new to Docker so excuse any obvious ignorance or misunderstandings. That said, I’ve been coding and configuring web applications for a long time now. I have recently been dabbling with some more sophisticated (Docker and “traditional”) web/application server setups and experimenting with performance enhancements and simplifying deployments.

My personal favorite configuration to use for most projects thus far is nginx as a (mostly) “static file” web server &&|| caching mechanism &&|| Load Balancer in a reverse proxy config with Apache v2.4+ as the “backend” which runs PHP (also Perl &&|| Python) in a “FastCGI” (PHP-FPM) configuration (with mod_php disabled) and mpm_events (instead of mpm_prefork). We can always add in something else also such as REDIS or memcached, where applicable (but I rarely see a significant performance gain when using Cloudflare caching).

I’ve been dabbling with a few different ways to keep my “favorite web server composition” flexible and “ephemeral” enough to accommodate any and all possible options for further performance gain or resource load management. What I keep seeing is that there seems to have been a shift towards using PHP-FPM in its own separate “stand-alone” Docker container sometime around late 2019.

Why?

While I can appreciate keeping resources isolated and separate so that configurations are easier to debug, configure, and secure, I don’t fully understand the benefits of having PHP-FPM in a separate Docker container when it is implicitly REQUIRED by the application that the Docker containers make up.
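
For concreteness, the kind of split being discussed looks roughly like this in docker-compose terms (image names, ports, and paths are my own placeholders, not a recommended setup):

services:
  web:
    image: nginx:1.25                 # serves static files, proxies PHP requests
    ports:
      - "80:80"
    volumes:
      - ./app:/var/www/html:ro
      - ./nginx.conf:/etc/nginx/conf.d/default.conf:ro
    depends_on:
      - php
  php:
    image: php:8.2-fpm                # PHP-FPM in its own container, reached over the compose network
    volumes:
      - ./app:/var/www/html

The nginx vhost would then hand PHP requests to the php service over the internal network (e.g. fastcgi_pass php:9000, the port the official php-fpm images listen on).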

If anything, having a separate PHP-FPM container seems like additional configuration, additional resource consumption and an even more complicated build/deploy consideration in a CI/CD pipeline.

I can even get on board with “simple preference”: instead of using Apache2 and nginx in the same Ubuntu/Debian or RHEL/CentOS container, break nginx off into its own container(s) and simply configure your hostname(s) &&|| IPs and ports appropriately.

But what is a practical use case, and what are the advantages, of separating PHP-FPM from the Apache or nginx containers, beyond perhaps using more Dockerfile &&|| docker-compose.yaml and less bash scripting to get the same job done? If any part of the application fails or falters, then functionality will suffer. The additional network &&|| docker-network communication and hostname resolution seems like it would just be a performance penalty compared with combining certain aspects of the docker-composed application, rather than splitting them up simply for the sake of splitting them up.

Go to Source
Author: kanidrive

Issue with Ubuntu

About a week after I installed Ubuntu, it went slow. I am not a technical user; my only use is Chrome and class recordings, but after some days it started taking a long time to boot, and when I open the browser it takes a long time to load. I am using Ubuntu 20, and when I got some updates I installed them, but the next morning it did not start and a message saying "failed to handle ACPI object" was shown. I searched for a solution and somehow got it working, but I think the issue is still there. Please can you tell me what is wrong with this? Because of it I am not able to attend my online classes. Please let me know the solution as soon as possible.

Go to Source
Author: anjali deo

Select from a table programmatically in MySQL

I need something where I can select from a table using a function.

Example:

SELECT * FROM HEX(UNHEX('users')) which should select from a table called “users”

This is just an example, but the general idea is that I want to be able to use functions to define which table I select from. Or maybe there is an alternative?
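
For reference, the usual way to choose the table name dynamically inside MySQL itself is a prepared statement built with CONCAT() (a sketch of the general technique, not necessarily what the asker is after):

SET @tbl := 'users';
SET @sql := CONCAT('SELECT * FROM `', @tbl, '`');
PREPARE stmt FROM @sql;
EXECUTE stmt;
DEALLOCATE PREPARE stmt;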

Go to Source
Author: Charlie Wells

How to resize docker volume after creation?

I would like to limit my Docker volumes in size, so I don’t have one container crashing my entire environment because of excessive writes depleting my storage space.

I found it is possible to limit a volume’s size at creation time using --opt o=size=100m.
However, is it possible to change the size limit after I have created the volume?
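
For context, the size cap mentioned above is applied when the volume is created, along these lines (the tmpfs-backed local driver here is an assumption; size options depend on the driver and backing filesystem):

docker volume create --driver local \
  --opt type=tmpfs --opt device=tmpfs --opt o=size=100m \
  limited-volume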

Go to Source
Author: GKman

How to test a core pipeline repo against another, dependent repo with Azure DevOps PR build validation?

Context

In my Azure DevOps project, I have multiple project repos using a common Azure DevOps pipeline. I have accomplished this by defining the pipeline in a core pipeline repo and then referring to it from each project repo.

Repo: core-pipeline

File: pipeline.yml

jobs:
  - job: exampleJob
    displayName: Example job
    steps:
      - checkout: core_pipeline # Not magic; see azure-pipelines.yml below.
        path: $(PIPELINE_DIR_RELATIVE)
      - checkout: self
        path: $(PROJECT_DIR_RELATIVE)
      - task: ShellScript@2
        displayName: Show project repo info
        inputs:
          scriptPath: $(PIPELINE_DIR)/showProjectInfo.sh
          cwd: $(PROJECT_DIR)

variables:
  __SOURCES_DIR__: s
  PIPELINE_DIR_RELATIVE: $(__SOURCES_DIR__)/core-pipeline
  PIPELINE_DIR: $(Pipeline.Workspace)/$(PIPELINE_DIR_RELATIVE)
  PROJECT_DIR_RELATIVE: $(__SOURCES_DIR__)/$(Build.Repository.Name)
  PROJECT_DIR: $(Pipeline.Workspace)/$(PROJECT_DIR_RELATIVE)

File: showProjectInfo.sh

#!/bin/bash

pwd
ls -al

Repo: example-project

File: azure-pipelines.yml

resources:
  repositories:
    - repository: core_pipeline # Must match `checkout:` in core pipeline repo.
      type: git
      name: MyAzureDevOpsProject/core-pipeline
      ref: master

trigger:
  - master

extends:
  template: pipeline.yml@core_pipeline # Must match `repository:` above.

(This is the only piece of code duplicated across project repos.)

Problem

The setup described above works well. For example, I can have build validation for pull requests in example-project, and I can trigger pipeline runs manually from the Azure DevOps web GUI. In the latter case, I can optionally select another branch for the core_pipeline repository resource, which is useful for testing changes to the core-pipeline repo before merging them.

However, there is nothing preventing me from merging PRs in core-pipeline without having tested them whatsoever. I have to remember to manually test each project repo against the core-pipeline PR branch before merging the PR, which is somewhat tedious and, above all, very error-prone. There is effectively no protection against making arbitrarily bad changes to the core pipeline, breaking the workflow of all project repos depending on it.

I can add example-project's pipeline to the build validation policies for PRs in core-pipeline, but then core-pipeline's master branch is used for the validation build, which is useless; the point is to use the PR branch.

Goal

I would like to have a build validation policy for PRs in the core-pipeline repo such that example-project must pass a pipeline run using that PR branch of core-pipeline before the PR can be merged.

It is not necessary that all project repos using the core pipeline repo automatically be covered by said build validation; I’d be perfectly happy if I had to manually select (once) which project repos should be included.

Important note

The core pipeline consists of two equally important parts:

  • the YAML template – processed at template compile time, before the pipeline run is started
  • the Bash script – cloned at runtime in the checkout task

Checking out a specific ref using inline syntax, e.g.

- checkout: 'git://MyAzureDevOpsProject/core_pipeline@${{ parameters.ref }}'
  path: $(PIPELINE_DIR_RELATIVE)

is not a (complete) solution, because even if parameters.ref can be populated with the name of the PR branch, only the Bash script will be affected; the YAML template will still be from the master branch, not the PR branch.

Go to Source
Author: Simon Alling

How to migrate data from custom site to WordPress?

Currently I have a site with around 6,000 articles running on a custom-built CMS platform, but due to demand from the client, we have been tasked with converting all the data to WordPress. Everything is stored in MongoDB in a custom structure (nothing extraordinary), and I have been looking at how I could convert the data to match the XML structure of WordPress for easy import, though I am not sure whether I have to build such a conversion script myself, or whether an open-source data-pairing tool exists where I can match a MongoDB query and its data to the designated fields in a WordPress DB.

The article contents have been structured using EditorJS, which behaves a lot like the Gutenberg editor in WordPress, so I would need to know how I should convert the data to match the Gutenberg format, or whether I should simply render the data as HTML and save it as such in the XML for import.
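
For what it’s worth, Gutenberg content is plain HTML annotated with block comments, so a per-block converter from EditorJS’s JSON can stay fairly small. A minimal sketch (assuming EditorJS’s standard paragraph and header block shapes; the function name is mine):

function editorjs_block_to_gutenberg( array $block ) {
    switch ( $block['type'] ) {
        case 'paragraph':
            return "<!-- wp:paragraph -->\n<p>{$block['data']['text']}</p>\n<!-- /wp:paragraph -->";
        case 'header':
            $level = (int) $block['data']['level'];
            return "<!-- wp:heading {\"level\":{$level}} -->\n<h{$level}>{$block['data']['text']}</h{$level}>\n<!-- /wp:heading -->";
        default:
            return ''; // extend for lists, images, quotes, etc.
    }
}

Concatenating the converted blocks yields post_content that the block editor will recognise, whether it is imported via a WXR file or inserted with wp_insert_post().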

Categories are stored in a separate collection with a parent -> child structure, and pages pretty much act as normal HTML.

Q: How would one go about converting this to WordPress data?

Go to Source
Author: Dimser