QuestionId | AnswerId | QuestionBody | QuestionTitle | AnswerBody |
---|---|---|---|---|
76378346 | 76378431 | Regular expression for text box validation. I need to validate a textbox with these conditions:
It can contain numbers and letters.
It must reject these characters: !@#$%^&*+=
Currently I am using the regular expression below, but it is not working.
<input type="text" pattern="^[\w-\.]+@([\w-]+\.)+[\w-]{2,4}$" />
Can you help me find the correct regular expression?
| How to modify a regular expression to validate a text box with letters and numbers only, excluding certain characters like !@#$%^&*+= | The regular expression you are currently using is for validating email addresses.
To validate the textbox so that it only contains letters and numbers and avoids the characters (!@#$%^&*+=), use:
^[a-zA-Z0-9]+$
This will match any string consisting only of letters and numbers. It will not match any string containing the characters !@#$%^&*+=.
<input type="text" pattern="^[a-zA-Z0-9]+$" />
Hope this will help. Thank you!
Refer to this link if you need more clarifications https://laasyasettyblog.hashnode.dev/validating-username-using-regex
Improving my answer based on the question you asked:
<!DOCTYPE html>
<html>
<head>
  <title>Test Pattern</title>
</head>
<body>
  <input type="text" id="input" pattern='^[a-zA-Z0-9./|()?~<>":;]+$' />
  <button onclick="validate()">Validate</button>
  <script>
    function validate() {
      var input = document.getElementById("input").value;
      // Anchors (^ and $) are needed so the whole value must match,
      // not just a substring of it.
      var pattern = /^[a-zA-Z0-9./|()?~<>":;]+$/;
      if (pattern.test(input)) {
        alert("Valid");
      } else {
        alert("Invalid");
      }
    }
  </script>
</body>
</html>
|
76378408 | 76378435 | I'm trying to create a blind auction. Basically it will ask for your name and then your bid. After that, it will ask if there are any other bidders; if yes, it asks for the next name and bid price. After you say yes, the terminal is cleared so that the next bidder can't see how much the previous person bid. But when I print [data_base], it never prints more than two keys and values.
Here is the output:
What is your name?: Gael
What is your bid: $560
Are there any other bidders? Type 'yes or 'no'.
yes
[({'Gael': ['560']},)]
What is your name?: Mikey
What is your bid: $350
Are there any other bidders? Type 'yes or 'no'.
yes
[({'Mikey': ['350']},)]
What is your name?: Josh
What is your bid: $298
Are there any other bidders? Type 'yes or 'no'.
no
Here is the final output:
[({'Mikey': ['350']},), ({'Josh': ['298']},)]
Gael's name and his bid are missing.
Here is the code:
import os

while True:
    name = input("What is your name?: ")
    bid = input("What is your bid: $")
    other_user = input("Are there any other bidders? Type 'yes or 'no'.\n")
    if other_user == 'yes':
        os.system('cls')
        data_base = [
        ]
    def new_user(name, bid):
        brandnew_user = {
            name: [bid]
        },
        data_base.append(brandnew_user)
    new_user(name, bid)
    print(data_base)
    if other_user == 'no':
        break
Thank you!!
I was expecting Gael's name and bid to be recorded, but it only recorded Mikey and Josh.
| How can I add a new key and value to an existing list | Here's a better way to organize things. Also, I'm not sure why you are creating a list of tuples of dictionaries. Why not just make data_base a dictionary and store the new entries as keys?
import os

data_base = []

while True:
    name = input("What is your name?: ")
    bid = input("What is your bid: $")
    data_base.append( {name: [bid]} )
    print(data_base)
    other_user = input("Are there any other bidders? Type 'yes or 'no'.\n")
    if other_user == 'no':
        break
Here's what I'm talking about:
import os

data_base = {}

while True:
    name = input("What is your name?: ")
    bid = input("What is your bid: $")
    data_base[name] = [bid]
    print(data_base)
    other_user = input("Are there any other bidders? Type 'yes or 'no'.\n")
    if other_user == 'no':
        break
|
76378340 | 76378439 | I'm getting an error in Android Studio on second ("cannot resolve symbol 'second'"). How do I fix it so that the video loops from second 358 back to 331 in this example?
package com.example.myapp;

import androidx.annotation.NonNull;
import androidx.appcompat.app.AppCompatActivity;
import android.content.Intent;
import android.os.Bundle;
import android.view.View;
import android.widget.RelativeLayout;
import com.pierfrancescosoffritti.androidyoutubeplayer.core.player.YouTubePlayer;
import com.pierfrancescosoffritti.androidyoutubeplayer.core.player.listeners.AbstractYouTubePlayerListener;
import com.pierfrancescosoffritti.androidyoutubeplayer.core.player.views.YouTubePlayerView;

public class FingerStretching extends AppCompatActivity {
    @Override
    protected void onCreate(Bundle savedInstanceState) {
        super.onCreate(savedInstanceState);
        setContentView(R.layout.activity_finger_stretching);
        YouTubePlayerView youTubePlayerView = findViewById(R.id.youtube_player_view);
        getLifecycle().addObserver(youTubePlayerView);
        youTubePlayerView.addYouTubePlayerListener(new AbstractYouTubePlayerListener() {
            String videoId = "mSZWSQSSEjE";

            @Override
            public void onReady(@NonNull YouTubePlayer youTubePlayer) {
                youTubePlayer.loadVideo(videoId, 331);
            }

            public void onCurrentSecond(@NonNull YouTubePlayer youTubePlayer) {
                if (second == 358) youTubePlayer.seekTo(331);
            }
        });
    }
}
I tried creating a local variable second.
| How to repeat video with start and end time in Android Studio? | According to the source code, the signature of onCurrentSecond is
override fun onCurrentSecond(youTubePlayer: YouTubePlayer, second: Float)
You are not overriding it. It should be
@Override
public void onCurrentSecond(@NonNull YouTubePlayer youTubePlayer, float second) {
    if (second >= 358) youTubePlayer.seekTo(331);
}
Such kind of error is easily avoidable if you make use of the auto complete feature in the IDE. Typing onC within the AbstractYouTubePlayerListener should give you auto complete option for onCurrentSecond, selecting it should automatically write the override function for you with correct signature.
|
76378344 | 76378496 | How to use React functions in CodePen?
I wrote a React function in CodePen to test React hooks, but it constantly reports the error: Uncaught ReferenceError: require is not defined.
My Code:
import { useState, useEffect, useRef } from 'react';

function Test() {
  const [count, setCount] = useState(0);
  const prevRef = useRef();
  useEffect(() => {
    // const ref = useRef();
    console.log('ref----', prevRef.current);
    prevRef.current = count;
  })
  return (
    <div>
      <div onClick={() => setCount(count + 1)}>+1</div>
      <div>{`count: ${count}`}</div>
      <div>{`precount: ${prevRef.current}`}</div>
    </div>
  )
}

ReactDOM.render(<Test />, document.getElementById("app"));
| How to use React functions in CodePen? | You can add a package by adjusting the settings in your Pen.
(The original answer includes a screenshot of the Pen settings here for reference.)
By doing so, it will automatically generate the necessary import statement:
import React, { useState, useEffect, useRef } from 'https://esm.sh/react@18.2.0';
import ReactDOM from 'https://esm.sh/react-dom@18.2.0';
To help you understand this process, I've created a sample code on CodePen. You can refer to this example to implement it yourself.
Here is the codepen link to the sample code: https://codepen.io/camel2243/pen/ExdBRar
|
76378323 | 76378505 | The code I currently have is below. In my views.py I can't figure out how to set up my search function; all other functions work.
models.py
class User(AbstractUser):
    """User can be Employee or Customer"""

class Business(models.Model):
    business = models.CharField(max_length=50)

class BusinessOwner(models.Model):
    user = models.OneToOneField(User, on_delete=models.CASCADE, null=True)
    business = models.ForeignKey(Business, on_delete=models.CASCADE, null=True)

class Customer(models.Model):
    """ Customer-specific information """
    user = models.OneToOneField(User, on_delete=models.CASCADE, null=True)
    business = models.ForeignKey(Business, on_delete=models.CASCADE, null=True)

class Employee(models.Model):
    """ Employee-specific information """
    user = models.OneToOneField(User, on_delete=models.CASCADE, null=True)
    business = models.ForeignKey(Business, on_delete=models.CASCADE, null=True, blank=True)
forms.py
class UserForm(UserCreationForm):
    class Meta:
        model = User
        fields = ("username", "email", "password1", "password2", "first_name", "last_name",)

class BusinessOwnerForm(forms.ModelForm):
    ...  # no fields

class EmployeeForm(forms.ModelForm):
    ...  # no fields

class CustomerForm(forms.ModelForm):
    ...  # no fields

class BusinessForm(forms.ModelForm):
    class Meta:
        model = Business
        fields = ("business",)
views.py (user creation process)
def searchUsers(request):
    qs_owned_businesses = BusinessOwner.objects.filter(user=request.user).values('business_id')
    qs_biz_customers = Customer.objects.filter(business_id__in=qs_owned_businesses)
    if request.method == "GET":
        query = request.GET.get('search')
        if query == '':
            query = 'None'
        results = User.objects.filter(username__icontains=query, id__in=qs_biz_customers)
        return render(request, 'search_users.html', {'query': query, 'results': results})

# Example of how employees and customers are created in my views:
def employeeCreation(request):
    """Creates an Employee"""
    if request.method == "POST":
        employee_form = EmployeeForm(request.POST)
        user_creation_form = UserForm(request.POST)
        if (user_creation_form.is_valid() and employee_form.is_valid()):
            employee_form.instance.business = request.user.businessowner.business
            new_user = user_creation_form.save(commit=False)
            employee_form.instance.user = new_user
            user_creation_form.save()
            employee_form.save()
            messages.success(request, "You Have Created An Employee")
            return redirect("user-homepage")
        else:
            messages.error(request, "Try creating an Employee Again something went wrong.")
    employee_form = EmployeeForm()
    user_creation_form = UserForm()
    return render(request, "registration/employee_creation.html",
                  context={"user_creation_form": user_creation_form,
                           "employee_form": employee_form,
                           })

def customerCreation(request):
    ...  # exactly the same as employeeCreation, just for a customer. The business owner's business is used as a starting point to build employees off of. I didn't include that view because it's not necessary here and Stack Overflow limits how much code I can post.
search_users.html
{% if results %}
    you Searched for {{ query }} . .
    {% for x in results %}
        {{ x }}<p></p>
    {% endfor %}
{% endif %}
I have tried using Q, icontains, .filter(), and django-filter, but this is tricky search criteria that I can't get to work.
navbar search feature:
<form action="{% url 'search-users' %}" class="form-inline" method="get">
    <div class="form-group mx-sm-3 mb-2">
        <label for="" class="sr-only">search</label>
        <input name="search" type="" class="form-control" id="" placeholder="Keyword">
    </div>
    <button type="submit" class="btn btn-success btn-lg mb-2">Search</button>
</form>
| Search Customers that are part of the logged on User's Business? | Let's break this down into tasks. I'm using values() to limit the request to what we're interested in, as I can then use that result to filter further.
# First you want to get all the businesses the logged in user owns.
# (Currently they can only own one, so you could use get rather than filter,
# but you might change that later and this approach will still work.)
qs_owned_businesses = BusinessOwner.objects.filter(user=request.user).values('business_id')

# Next you want to get all the customers of those businesses.
qs_biz_customers = Customer.objects.filter(business_id__in=qs_owned_businesses).values('user_id')

# Finally you want to filter those customers further based on your form field.
# Remember, the icontains criteria needs to refer to a field;
# here we're looking at username, but you might use last_name or something else.
results = User.objects.filter(username__icontains=query, id__in=qs_biz_customers)
results should now be a list of users you can cycle through in your template to show names, usernames etc.
|
76378468 | 76378523 | ALL,
I made a local branch in my fork a long time ago and pushed some changes to it. I then submitted a PR which passed the CI build.
Now, after some time, I came back to the same machine I produced the PR on, but for some reason I didn't check which branch I was on, made a couple of commits on the old branch, and pushed them, therefore screwing up the PR (it was not yet merged, due to the lack of test code).
Now what I'd like to do is go to Github Web interface, remove those commits, but keep them locally, because I can just generate a patch on my local machine, remove those commits, switch to the new branch and apply the patch to it.
Or maybe even there is a better solution?
So how do I solve this mess?
Keep in mind - I intend to finish the PR with the test, but those are 2 completely unrelated things.
TIA!!
EDIT:
Everything worked fine: my old branch on the original laptop is back to normal and the PR is now good.
However, in order to add the unit test I had to go to a different machine and do a git pull. For some unknown reason, after that the git tree on that machine became clogged with everything, including the bad commit.
I was able to remove the bad commits with git reset --hard N, but I fear the same will happen when I try to test my unit test on all platforms/different laptops, which means my changes will be lost and I will need to redo them again for the UT on all the different machines.
Can you help me here as well?
TIA!!
| Remove remote commits on the branch in GitHub | After some thought, my original answer is more complicated than strictly necessary, but I'll leave it below.
The easiest way to get your original branch back to its old state and keep the new commits is to create a new branch then reset the old branch and force push. It looks like this:
git checkout old-branch
git branch new-branch
git reset --hard <hash of commit you want to keep in old-branch>
git push -f
Alternatively you can use
git reset --hard HEAD~n
where n is the number of commits you want to remove from the old branch.
Now you can do whatever you wish with the new branch, such as rebase it onto main. This might not be entirely necessary. If for example, your PR is merged, you will need to pull those changes into the new branch anyway before making the second PR. However, if you want to make a 2nd PR before the 1st is merged, then it is better to keep them separate until one of them is merged.
TLDR
The easiest way to fix a remote repository is to first make the changes locally and then push, possibly force push, to GitHub or other remote.
Details
You can do this all locally first, then push to GitHub to fix the PR. First, you should create a new branch and git cherry-pick the commits that you want to keep but remove from the other branch.
Start by getting the hashes of the commits you want:
git checkout old-branch
git log --oneline --graph
Copy the commit hashes for the commits you want to move. Then do
git checkout -b new-branch main
and for each of the hashes you copied:
git cherry-pick <hash>
Alternatively, you can do this more easily with git rebase. You only need the hash of the oldest commit you want to keep:
git checkout -b new-branch old-branch
git rebase --onto main <hash of oldest commit>~
Now go back to your old branch and get rid of all the commits you no longer want:
git checkout old-branch
git reset --hard <hash of the first commit you want to keep on this branch>
Finally force push:
git push -f
This will automatically update the PR back to its original state, if you used the correct hash for the git reset command.
|
76378419 | 76378558 | I am creating a google chrome extension. On the popup, I am displaying a leaderboard. However, I am new to JavaScript so I don't know how to properly use async. I am using chrome.storage to get stored scores to display on the leaderboard, then sending them from background.js to score.js. My issue is that, since chrome.storage.get happens asynchronously, my findScores method does not wait for chrome.storage.get to finish before incorrectly returning a default empty score.
Here is my code:
background.js
chrome.runtime.onMessage.addListener(
    function(request, sender, sendResponse) {
        console.log(sender.tab ?
            "from a content script:" + sender.tab.url :
            "from the extension");
        if (request.type === "request") {
            var scoresVar = findScores(request.table, "All");
            console.log("Sending response " + scoresVar);
            sendResponse({scores: scoresVar})
        }
        else if (request.type === "score") {
            saveScore(request.website, request.score, request.tab);
            sendResponse("Finished adding score " + request.score);
        }
    }
);

function findScores(table, website) {
    const categories = table.split("-");
    if (categories.includes("personal")) {
        chrome.storage.sync.get([website], function(response) {
            if (!(typeof response[website] === 'undefined')) {
                console.log("Found " + response[website]);
                return response[website];
            }
        });
    } else if (categories.includes("global")) {
        // TODO: Add global leaderboards
        return ["-"];
    }
    console.log("Didn't find, on default");
    return ["-"];
}
popup.js
async function requestScores(tableID) {
    var url = "All"
    if (tableID.includes("current")) {
        var url = await getCurrentTab();
    }
    console.log("Sending message to load scores to " + url);
    (async () => {
        const response = await chrome.runtime.sendMessage({type: "request", request: "load scores", table: tableID, tab: url});
        console.log("Received: " + response);
        // add scores to HTML DOM
    })();
}
My console messages reveal that I first return a default score, which is sent to popup.js. I have tried throwing async keywords in front of functions (as well as await in front of variables, like scoresVar = await findScores(request.table, "All")), but it just caused more issues, where findScores still returned a default value, but background.js instead sent an undefined promise.
How can I fix my code?
| How to use async properly to get chrome.storage? | It is simpler to work with Promises and async/await instead of callbacks. chrome.storage.sync.get returns a Promise if you do not pass a callback.
async function findScores(table, website) {
    // ...
    if (categories.includes("personal")) {
        const response = await chrome.storage.sync.get([website]);
        if (response[website] !== undefined) {
            console.log("Found " + response[website]);
            return response[website];
        }
    }
    // ...
}

// ...
chrome.runtime.onMessage.addListener(function(request, sender, sendResponse) {
    // ...
    findScores(request.table, "All").then(scores => {
        console.log("Sending response " + scores);
        sendResponse({scores});
    });
    return true; // keep the messaging channel open for sendResponse
});
Note that the callback of onMessage should return a literal true value (documentation) in order to keep the internal messaging channel open so that sendResponse can work asynchronously.
|
76383950 | 76384028 | I have a text field and it has an onSubmit method, inside which I check for validation and then focus on another field, but for some reason the focus does not work
onSubmitted: (value) {
    // print("ga test");
    if (!widget.validator?.call(value)) {
        setState(() {
            showError = true;
        });
    }
    if (widget.nextFocus != null) {
        FocusScope.of(context).requestFocus(widget.nextFocus);
    }
},
| How do I change the focus of the text field on Submit? | I did so and it worked
if (widget.validator != null) {
    setState(() {
        showError = !widget.validator?.call(value);
    });
}
if (widget.nextFocus != null) {
    FocusScope.of(context).requestFocus(widget.nextFocus);
}
|
76380624 | 76380646 | I have an application that was executing TestNG tests perfectly with maven, for example, when using a mvn clean install command.
I have now updated the application to Spring Boot 3.1.0, and the tests are completely ignored. No tests are executed.
I am using a classic testng.xml file defined on the maven-surefire-plugin:
<plugin>
    <groupId>org.apache.maven.plugins</groupId>
    <artifactId>maven-surefire-plugin</artifactId>
    <version>${maven-surefire-plugin.version}</version>
    <configuration>
        <suiteXmlFiles>
            <suiteXmlFile>src/test/resources/testng.xml</suiteXmlFile>
        </suiteXmlFiles>
    </configuration>
</plugin>
All the solutions I have found relate to the Java classes ending in *Test.java, but that does not apply here since I am using the TestNG suite file. And before the update, the tests worked fine.
What has changed in Spring Boot 3 that skips my tests?
| TestNG tests are ignored after upgrading to Spring Boot 3 and maven-surefire-plugin 3.1.0 | OK, I have found the "issue". It seems that the new versions of maven-surefire-plugin need the extra surefire-testng provider in order to execute the suite:
<plugin>
    <groupId>org.apache.maven.plugins</groupId>
    <artifactId>maven-surefire-plugin</artifactId>
    <version>3.1.0</version>
    <configuration>
        <suiteXmlFiles>
            <suiteXmlFile>src/test/resources/testng.xml</suiteXmlFile>
        </suiteXmlFiles>
    </configuration>
    <dependencies>
        <dependency>
            <groupId>org.apache.maven.surefire</groupId>
            <artifactId>surefire-testng</artifactId>
            <version>3.1.0</version>
        </dependency>
    </dependencies>
</plugin>
After including the dependency in the plugin, it is now working fine.
|
76380600 | 76380785 | I'm using Okta provider to create okta_app_oauth and okta_app_group_assignments. My module looks like:
resource "okta_app_oauth" "app" {
label = var.label
type = var.type
grant_types = var.grant_types
redirect_uris = var.type != "service" ? var.redirect_uris : null
response_types = var.response_types
login_mode = var.login_mode
login_uri = var.login_uri
post_logout_redirect_uris = var.post_logout_redirect_uris
consent_method = var.consent_method
token_endpoint_auth_method = var.token_endpoint_auth_method
pkce_required = var.token_endpoint_auth_method == "none" ? true : var.pkce_required
lifecycle {
ignore_changes = [
client_basic_secret, groups
]
}
}
resource "okta_app_group_assignments" "app" {
app_id = okta_app_oauth.app.id
dynamic "group" {
for_each = var.app_groups
content {
id = group.value["id"]
priority = group.value["priority"]
}
}
}
And it works when I assign groups to the application, but when I don't want to assign groups, I get this error:
╷
│ Error: Invalid index
│
│   on main.tf line 26, in resource "okta_app_group_assignments" "app":
│   26:       id       = group.value["id"]
│     ├────────────────
│     │ group.value is empty map of dynamic
│
│ The given key does not identify an element in this collection value.
╵
In addition, my app_groups variable looks like:
variable "app_groups" {
  description = "Groups assigned to app"
  type        = list(map(any))
  default     = [{}]
}
I was trying to use lookup(group, "priority", null), but it wasn't resolving my problem. Can somebody help me with solving this?
| Terragrunt - make dynamic group optional | You can make the block optional as follows:
dynamic "group" {
for_each = length(var.app_groups) > 0 : var.app_groups : []
content {
id = group.value["id"]
priority = group.value["priority"]
}
}
Also, your default value for app_groups should be:
variable "app_groups" {
  description = "Groups assigned to app"
  type        = list(map(any))
  default     = []
}
|
76378487 | 76378586 | I have a table PetsTable:
Id | Type | key | value
---|---|---|---
1 | "Cat" | 10 | 5
1 | "Cat" | 9 | 2
2 | "dog" | 10 | 5
1 | "Cat" | 8 | 4
1 | "Cat" | 6 | 3
2 | "dog" | 8 | 4
2 | "dog" | 6 | 3
3 | "Cat" | 13 | 5
3 | "Cat" | 10 | 0
3 | "Cat" | 8 | 0
How to insert this data into a new table MyPets from PetsTable with these conditions:
Group by Id
Only select groups in which (key = 10 and value = 5) and (key = 8 and value = 4) and (key = 6 and value = 3) all exist
If exists key = 9, then mark hasFee = 1 else hasFee = 0
Final table should look like:
Id | Type | hasFee
---|---|---
1 | "Cat" | 1
2 | "dog" | 0
| Group by and select rows based on if value combinations exist | One approach is to use window functions to evaluate your conditions, which you can then apply as conditions using a CTE.
This creates the data you desire, its then trivial to insert into a table of your choice.
create table Test (Id int, [Type] varchar(3), [Key] int, [Value] int);
insert into Test (Id, [Type], [Key], [Value])
values
(1, 'Cat', 10, 5),
(1, 'Cat', 9, 2),
(2, 'Dog', 10, 5),
(1, 'Cat', 8, 4),
(1, 'Cat', 6, 3),
(2, 'Dog', 8, 4),
(2, 'Dog', 6, 3),
(3, 'Cat', 13, 5),
(3, 'Cat', 10, 0),
(3, 'Cat', 8, 0);
with cte as (
select *
, sum(case when [Key] = 10 and [Value] = 5 then 1 else 0 end) over (partition by Id) Cond1
, sum(case when [Key] = 8 and [Value] = 4 then 1 else 0 end) over (partition by Id) Cond2
, sum(case when [Key] = 6 and [Value] = 3 then 1 else 0 end) over (partition by Id) Cond3
, sum(case when [Key] = 9 then 1 else 0 end) over (partition by Id) HasFee
from Test
)
select Id, [Type], HasFee
from cte
where Cond1 = 1 and Cond2 = 1 and Cond3 = 1
group by Id, [Type], HasFee;
Returns:
Id | Type | HasFee
---|---|---
1 | Cat | 1
2 | Dog | 0
Note: If you provide your sample data in this format (DDL+DML) you make it much easier for people to assist.
db<>fiddle
|
76380579 | 76380826 | For work I need to connect to test nodes and establish a VNC connection so you can see the desktop remotely. It's a manual process with a bunch of commands that need to be executed in order, which is perfect for automation using a bash script. The problem is that some commands need to be executed on the remote node after an SSH connection is established.
Currently I've got it working like this, where startVNC is a separate bash file which stores the commands that need to be executed on the remote node after the SSH connection is established.
cat startVNC | sed -e "s/\$scaling/$scaling/" -e "s/\$address/$address/" -e "s/\$display/$display/" | ssh -X maintain@$host
For my question the contents of startVNC don't really matter, just that multiple commands can be executed in order. It could be:
echo "hello"
sleep 1
echo "world"
While for personal use this solution is fine, I find it a bit of a bother that this needs to be done using two separate bash files. If I want to share this file (which I do) it'd be better if it was just one file. My question is, is it possible to mimic the output from cat in some way using a variable?
| How to store multiple commands in a bash variable (similar to cat otherscript.sh) | Well, you could do:
a="echo 'hello'\nsleep 2\necho world\n"
echo -e $a
# output-> echo 'hello'
# output-> sleep 2
# output-> echo world
echo -e $a | bash
# output-> hello
# waiting 2 secs
# output-> world
The -e in echo enables the interpretation of the \n.
|
76383957 | 76384041 | I have a demo Spring Integration project which receives Kafka messages, aggregates them, and then releases them. I'm trying to add a JdbcMessageStore to the project. The problem is that it fails with this error:
Caused by: java.lang.IllegalArgumentException: Cannot store messages without an ID header
at org.springframework.util.Assert.notNull(Assert.java:201) ~[spring-core-5.2.15.RELEASE.jar:5.2.15.RELEASE]
at org.springframework.integration.jdbc.store.JdbcMessageStore.addMessage(JdbcMessageStore.java:314) ~[spring-integration-jdbc-5.3.8.RELEASE.jar:5.3.8.RELEASE]
After debugging I found that it requires the UUID id header on the message. But the problem is that I can't manually set the Kafka id header; it is forbidden (the same as the timestamp header). I tried to do this in a Kafka producer in a different project.
If I use the IDEA plugin named Big Data Tools and send a message from there, I am able to set an id header, but it is received by my project as an array of bytes and fails with the error:
IllegalArgumentException Incorrect type specified for header 'id'. Expected [UUID] but actual type is [B]
I can't find any solution for this issue. I need to somehow set this id header to be able to store messages in the database.
Thanks in advance
| How to set ID header in Spring Integration Kafka Message? | The KafkaMessageDrivenChannelAdapter has an option:
/**
 * Set the message converter to use with a record-based consumer.
 * @param messageConverter the converter.
 */
public void setRecordMessageConverter(RecordMessageConverter messageConverter) {

Where you can set a MessagingMessageConverter with:

/**
 * Generate {@link Message} {@code ids} for produced messages. If set to {@code false},
 * will try to use a default value. By default set to {@code false}.
 * @param generateMessageId true if a message id should be generated
 */
public void setGenerateMessageId(boolean generateMessageId) {
    this.generateMessageId = generateMessageId;
}

/**
 * Generate {@code timestamp} for produced messages. If set to {@code false}, -1 is
 * used instead. By default set to {@code false}.
 * @param generateTimestamp true if a timestamp should be generated
 */
public void setGenerateTimestamp(boolean generateTimestamp) {
    this.generateTimestamp = generateTimestamp;
}

set to true.
This way the Message created from a ConsumerRecord will have respective id and timestamp headers.
You can also simply have a "dummy" transformer that returns the incoming payload; the framework will then create a new Message for which those headers are generated.
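A minimal sketch of such a pass-through transformer (my illustration, not from the original answer; the channel names are assumptions):
// Hypothetical channel names. Returning the payload (not the whole Message)
// makes the framework wrap it in a new Message with freshly generated
// id and timestamp headers.
@Transformer(inputChannel = "fromKafka", outputChannel = "toAggregator")
public Object passThrough(Object payload) {
    return payload;
}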
|
76383902 | 76384109 | I have some SQL that does some manipulation to the data i.e. filling in empty columns.
SELECT *,
ModifiedLineData = CASE
WHEN Column2 = '' AND LineData NOT LIKE ',,,0,,,,0'
THEN CONCAT(STUFF(LineData, CHARINDEX(',', LineData, CHARINDEX(',', LineData) + 1), 0, '"No PO Number"'), ',""')
ELSE CONCAT(LineData, ',""')
END
FROM (
SELECT
*,
Column2 = CONVERT(XML, '<s>' + REPLACE((SELECT ISNULL(LineData, '') FOR XML PATH('')), ',', '</s><s>') + '</s>').value('/s[2]', 'varchar(100)')
FROM [dbo].[Temp_Raw_Data]
WHERE LineData NOT LIKE ',,,0,,,,0'
) AS Subquery
Now let's say this returns:
FileName | LineNumber | LineData | Column2 | ModifiedLineData
---|---|---|---|---
file1 | 4 | 1232,,"product-1", 1,0 | | 1232,NA,"product-1", 1,0
file2 | 7 | "failed" | NULL | "failed"
file3 | 8 | 1235,,"product-2", 1,0 | | 1235,NA,"product-2", 1,0
How can I modify this query so that if Column2 is NULL it concatenates the LineData onto the next row's ModifiedLineData, else just concatenates a ,"", and then removes that NULL row (if possible, otherwise it doesn't matter), so that my result looks like:
FileName | LineNumber | LineData | Column2 | ModifiedLineData
---|---|---|---|---
file1 | 4 | 1232,,"product-1", 1,0 | | 1232,NA,"product-1", 1,0,""
file3 | 8 | 1235,,"product-2", 1,0 | | 1235,NA,"product-2", 1,0,"failed"
I tried playing around with LEAD() but couldn't get it to do what I wanted.
Note: two NULL rows can never be adjacent, due to the nature of the data. The next row is simply the next available row when selecting all rows, as they are imported one by one.
Updated query that isn't concatenating:
SELECT *
FROM (SELECT FileName, LineNumber, LineData, Column2,
CASE WHEN LAG(Column2) OVER(ORDER BY LineNumber) IS NULL
THEN CONCAT_WS(', ',
ModifiedLineData,
LAG(ModifiedLineData) OVER(ORDER BY LineNumber))
ELSE ModifiedLineData
END AS ModifiedLineData
FROM (
SELECT *,
ModifiedLineData = CASE
WHEN Column2 = '' AND LineData NOT LIKE ',,,0,,,,0'
THEN CONCAT(STUFF(LineData, CHARINDEX(',', LineData, CHARINDEX(',', LineData) + 1), 0, '"No PO Number"'), '')
ELSE CONCAT(LineData, '')
END
FROM (
SELECT *,
Column2 = CONVERT(XML, '<s>' + REPLACE((SELECT ISNULL(LineData, '') FOR XML PATH('')), ',', '</s><s>') + '</s>').value('/s[2]', 'varchar(100)')
FROM [backstreet_WMS_Optimizer].[dbo].[Temp_GoodsIn_Raw_Data]
WHERE LineData NOT LIKE ',,,0,,,,0'
) AS Subquery
) AS cte
) AS Subquery
WHERE Column2 IS NOT NULL
order by FileName, LineNumber
| Concatenate onto Next Row | Given that you can't have consecutive NULL values, using LEAD/LAG should be suitable for this task. Without knowledge of your original data, we can work on your query and add two subqueries on top, the last of which is optional:
- the inner one adds the information needed to the record following each "Column2 = NULL" record
- the outer one removes the records having those NULL values
SELECT *
FROM (SELECT FileName, LineNumber, LineData, Column2,
CASE WHEN LAG(Column2) OVER(ORDER BY LineNumber) IS NULL
THEN CONCAT_WS(', ',
ModifiedLineData,
LAG(ModifiedLineData) OVER(ORDER BY LineNumber))
ELSE ModifiedLineData
END AS ModifiedLineData
FROM <your query>) cte
WHERE Column2 IS NOT NULL
Output:
FileName | LineNumber | LineData | Column2 | ModifiedLineData
---|---|---|---|---
file1 | 4 | 1232,,"product-1", 1,0 | | 1232,NA,"product-1", 1,0
file3 | 8 | 1235,,"product-2", 1,0 | | 1235,NA,"product-2", 1,0"failed"
Check the demo here.
|
76378480 | 76378588 | I'm working through The Odin Project and I'm having trouble making my main content take up the rest of the space of the browser.
Right now it looks like this (screenshot omitted): the 1px solid red border is as far as the main content goes. I have tried this but it's not allowing for a fixed header and footer. I have also tried some other flex solutions; those are commented out in the code.
Am I just doing this whole thing wrong? Is there a standard way that I don't know about?
index.html:
<body>
  <div class="header">
    <h1>MY AWESOME WEBSITE</h1>
  </div>
  <div class="main-content">
    <div class="sidebar">
      <ul>
        <li><a href="#">⭐ - link one</a></li>
        <li><a href="#">🦸🏽‍♂️ - link two</a></li>
        <li><a href="#">🎗️ - link three</a></li>
        <li><a href="#">👍🏽 - link four</a></li>
      </ul>
    </div>
    <div class="content">
      <div class="card">Lorem ipsum dolor sit amet consectetur adipisicing elit. Tempora, eveniet? Dolorem dignissimos maiores non delectus possimus dolor nulla repudiandae vitae provident quae, obcaecati ipsam unde impedit corrupti veritatis minima porro?</div>
      <div class="card">Lorem ipsum dolor sit amet consectetur adipisicing elit. Quasi quaerat qui iure ipsam maiores velit tempora, deleniti nesciunt fuga suscipit alias vero rem, corporis officia totam saepe excepturi odit ea.</div>
      <div class="card">Lorem ipsum dolor sit amet consectetur, adipisicing elit. Nobis illo ex quas, commodi eligendi aliquam ut, dolor, atque aliquid iure nulla. Laudantium optio accusantium quaerat fugiat, natus officia esse autem?</div>
      <div class="card">Lorem ipsum dolor sit amet consectetur adipisicing elit. Necessitatibus nihil impedit eius amet adipisci dolorum vel nostrum sit excepturi corporis tenetur cum, dolore incidunt blanditiis. Unde earum minima laboriosam eos!</div>
      <div class="card">Lorem ipsum dolor sit amet consectetur, adipisicing elit. Nobis illo ex quas, commodi eligendi aliquam ut, dolor, atque aliquid iure nulla. Laudantium optio accusantium quaerat fugiat, natus officia esse autem?</div>
      <div class="card">Lorem ipsum dolor sit amet consectetur adipisicing elit. Necessitatibus nihil impedit eius amet adipisci dolorum vel nostrum sit excepturi corporis tenetur cum, dolore incidunt blanditiis. Unde earum minima laboriosam eos!</div>
    </div>
  </div>
  <div class="footer">
    The Odin Project ❤️
  </div>
</body>
</html>
style-07.css:
:root {
  --header-height: 72px;
}

body {
  font-family: -apple-system, BlinkMacSystemFont, 'Segoe UI', Roboto, Oxygen, Ubuntu, Cantarell, 'Open Sans', 'Helvetica Neue', sans-serif;
  margin: 0;
  min-height: 100vh;
  height: 100%;
}

.main-content {
  display: flex;
  height: 100%; /* If I use px units it will force the main content to go down but I know that is not ideal. */
  padding-top: var(--header-height);
  flex-direction: row;
  border: 1px solid red;
  /* Things I have tried from other answers */
  /* flex: 1 1 auto; */
  /* height: calc(100% - var(--header-height)); */
}

.sidebar {
  flex-shrink: 0;
}

.content {
  padding: 32px;
  display: flex;
  flex-wrap: wrap;
}

.card {
  width: 300px;
  padding: 16px;
  margin: 16px;
}

.header {
  position: fixed;
  top: 0;
  left: 0;
  right: 0;
  display: flex;
  align-items: center;
  height: var(--header-height);
  background: darkmagenta;
  color: white;
  padding: 0px 15px;
}

h1 {
  font-weight: 1000;
}

.footer {
  height: var(--header-height);
  background: #eee;
  color: darkmagenta;
  position: fixed;
  bottom: 0;
  left: 0;
  right: 0;
  width: 100%;
  height: 5%;
  display: flex;
  justify-content: center;
  align-items: center;
}

.sidebar {
  width: 300px;
  background: royalblue;
  box-sizing: border-box;
  padding: 16px;
}

.card {
  border: 1px solid #eee;
  box-shadow: 2px 4px 16px rgba(0, 0, 0, .06);
  border-radius: 4px;
}

ul {
  list-style-type: none;
  margin: 0;
  padding: 0;
}

a {
  text-decoration: none;
  color: white;
  font-size: 24px;
}

li {
  margin-bottom: 16px;
}
| How do I get my main content to take up the rest of the space left over after the header and footer? | You can use flex display on the body instead of fixed positioning on the header and footer: make the body display: flex with column direction, then for main-content all you need is to set flex: 1 and remove the top padding; flex: 1 will make sure that main-content takes any remaining space in the parent. Set the body to height: 100vh and overflow: hidden, and on main-content set overflow: auto.
Additionally, to make the sidebar sticky when scrolling, I added position: relative; to main-content and position: sticky; to the sidebar.
To force the header and footer heights and prevent them from being squeezed by the flex layout, use min-height instead of height, as I modified in the code.
Try to view the run code in full page; if you have any further questions, comment below.
:root {
  --header-height: 72px;
}

body {
  font-family: -apple-system, BlinkMacSystemFont, 'Segoe UI', Roboto, Oxygen, Ubuntu, Cantarell, 'Open Sans', 'Helvetica Neue', sans-serif;
  margin: 0;
  height: 100vh;
  overflow: hidden;
  display: flex;
  flex-direction: column;
}

.main-content {
  flex: 1;
  display: flex;
  overflow-y: auto;
  flex-direction: row;
  border: 1px solid red;
  /* Things I have tried from other answers */
  /* flex: 1 1 auto; */
  /* height: calc(100% - var(--header-height)); */
  position: relative;
}

.content {
  padding: 32px;
  display: flex;
  flex-wrap: wrap;
}

.card {
  width: 300px;
  padding: 16px;
  margin: 16px;
}

.header {
  display: flex;
  align-items: center;
  min-height: var(--header-height);
  background: darkmagenta;
  color: white;
  padding: 0px 15px;
}

h1 {
  font-weight: 1000;
}

.footer {
  min-height: var(--header-height);
  background: #eee;
  color: darkmagenta;
  width: 100%;
  height: 5%;
  display: flex;
  justify-content: center;
  align-items: center;
}

.sidebar {
  width: 300px;
  background: royalblue;
  box-sizing: border-box;
  padding: 16px;
  position: sticky;
  top: 0;
  white-space: nowrap;
  min-height: 250px;
}

.card {
  border: 1px solid #eee;
  box-shadow: 2px 4px 16px rgba(0, 0, 0, .06);
  border-radius: 4px;
}

ul {
  list-style-type: none;
  margin: 0;
  padding: 0;
}

a {
  text-decoration: none;
  color: white;
  font-size: 24px;
}

li {
  margin-bottom: 16px;
}
<body>
  <div class="header">
    <h1>MY AWESOME WEBSITE</h1>
  </div>
  <div class="main-content">
    <div class="sidebar">
      <ul>
        <li><a href="#">⭐ - link one</a></li>
        <li><a href="#">🦸🏽‍♂️ - link two</a></li>
        <li><a href="#">🎗️ - link three</a></li>
        <li><a href="#">👍🏽 - link four</a></li>
      </ul>
    </div>
    <div class="content">
      <div class="card">Lorem ipsum dolor sit amet consectetur adipisicing elit. Tempora, eveniet? Dolorem dignissimos maiores non delectus possimus dolor nulla repudiandae vitae provident quae, obcaecati ipsam unde impedit corrupti veritatis minima porro?</div>
      <div class="card">Lorem ipsum dolor sit amet consectetur adipisicing elit. Quasi quaerat qui iure ipsam maiores velit tempora, deleniti nesciunt fuga suscipit alias vero rem, corporis officia totam saepe excepturi odit ea.</div>
      <div class="card">Lorem ipsum dolor sit amet consectetur, adipisicing elit. Nobis illo ex quas, commodi eligendi aliquam ut, dolor, atque aliquid iure nulla. Laudantium optio accusantium quaerat fugiat, natus officia esse autem?</div>
      <div class="card">Lorem ipsum dolor sit amet consectetur adipisicing elit. Necessitatibus nihil impedit eius amet adipisci dolorum vel nostrum sit excepturi corporis tenetur cum, dolore incidunt blanditiis. Unde earum minima laboriosam eos!</div>
      <div class="card">Lorem ipsum dolor sit amet consectetur, adipisicing elit. Nobis illo ex quas, commodi eligendi aliquam ut, dolor, atque aliquid iure nulla. Laudantium optio accusantium quaerat fugiat, natus officia esse autem?</div>
      <div class="card">Lorem ipsum dolor sit amet consectetur adipisicing elit. Necessitatibus nihil impedit eius amet adipisci dolorum vel nostrum sit excepturi corporis tenetur cum, dolore incidunt blanditiis. Unde earum minima laboriosam eos!</div>
    </div>
  </div>
  <div class="footer">
    The Odin Project ❤️
  </div>
</body>
</html>
|
76384080 | 76384128 | For whatever reason, my Kotlin program won't initialize variables assigned inside a when statement. Here's the code:
import kotlin.random.Random
import kotlin.random.nextInt

val mood: String

when (Random.nextInt(1..2)) {
    1 -> {
        mood = "loud"
        println("$mood")
    }
    2 -> {
        mood = "quiet"
        println("$mood")
    }
}

println("$mood")
The lines inside the when statement are printed, but when I run the last line, I get a "Variable 'mood' must be initialized" error.
I don't know what I could possibly be doing wrong here...
| Can't initialize variables inside of when statement in Kotlin | In Kotlin, variables declared with the val keyword must be initialized at the point of declaration or in the constructor of the class. In your code, the mood variable is declared without an initial value, and you are trying to assign values to it inside the when statement. However, the compiler is unable to determine if either of the branches will be executed at runtime, so it doesn't consider the variable as fully initialized.
To fix this issue, you can either declare the mood variable as a var instead of a val, or assign an initial value to it when declaring it. Here's an updated version of your code using a var with an initial value (a bare var mood: String would still trip definite-assignment checking when read later):
import kotlin.random.Random
import kotlin.random.nextInt

var mood = ""  // initial value, so the variable is definitely assigned

when (Random.nextInt(1..2)) {
    1 -> {
        mood = "loud"
        println("$mood")
    }
    2 -> {
        mood = "quiet"
        println("$mood")
    }
}

println("$mood")
By using a var with an initial value, the variable is always initialized before it is read and can be reassigned inside the when branches, so the compiler no longer complains about it being uninitialized.
Note that the order of the when branches should cover all possible cases, otherwise you might encounter a "when expression must be exhaustive" warning. In your case, the range of nextInt is 1 to 2, so the two branches should be sufficient.
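A further sketch (my addition, not part of the original answer): if you want to keep val, making the when exhaustive with an else branch and using it as an expression also satisfies the compiler, since every path then assigns a value:
import kotlin.random.Random
import kotlin.random.nextInt

fun main() {
    // `when` used as an expression with `else` covers all cases,
    // so the compiler can prove `mood` is always initialized.
    val mood: String = when (Random.nextInt(1..2)) {
        1 -> "loud"
        else -> "quiet"
    }
    println(mood)
}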
|
76380728 | 76380908 | My deep link works fine on Android and transfers information to the app, but it doesn't work on iOS
Firebase Link
https://dvzpl.com
my short link
https://dvzpl.com/6BG2
my domain
https://dovizpanel.com/
my associated domain
<dict>
    <key>aps-environment</key>
    <string>development</string>
    <key>com.apple.developer.associated-domains</key>
    <array>
        <string>webcredentials:dvzpl.com</string>
        <string>applinks:dvzpl.com</string>
    </array>
</dict>
How do I fix this?
When I open the short link in the browser, it goes into the app but does not transfer the data on iOS; on Android it works without problems.
<key>FirebaseDynamicLinksCustomDomains</key>
<array>
    <string>https://dovizpanel.com/blog</string>
    <string>https://dovizpanel.com/exchanger</string>
    <string>https://dovizpanel.com/link</string>
</array>
| Flutter Deep Link Firebase in iOS | If you are using a custom domain for firebase dynamic links follow the instructions below:
In your Xcode project's Info.plist file, create a key called FirebaseDynamicLinksCustomDomains and set it to your app's Dynamic Links URL prefixes. For example:
<key>FirebaseDynamicLinksCustomDomains</key>
<array>
    <string>https://dvzpl.com</string>
</array>
You can find more details directly in the Firebase documentation.
|
76384218 | 76384269 | My question is: how can I toggle/display the "Some text" content on onClick individually?
I can use a different function and state for every div, and it works, but I know this is not the correct way to do it.
Can you help me with this, guys? Thanks.
This is my code
function App() {
const [loaded, setLoaded] = useState(true);
const [show, setShow] = useState(false);
const handleShow = () => {
setShow(!show);
};
return (
<div className={styles.App}>
{loaded && (
<div className={styles.cards_container}>
<div className={styles.card_container} onClick={handleShow}>
<h3>Title</h3>
{show && (
<div>
<p>Some text</p>
</div>
)}
</div>
<div className={styles.card_container} onClick={handleShow}>
<h3>Title</h3>
{show && (
<div>
<p>Some text</p>
</div>
)}
</div>
<div className={styles.card_container} onClick={handleShow}>
<h3>Title</h3>
{show && (
<div>
<p>Some text</p>
</div>
)}
</div>
</div>
)}
</div>
);
}
| How to toggle/display content individually in ReactJS | You could create a custom component for your card that handles the state for each card:
function Card() {
const [show, setShow] = useState(false);
const handleShow = () => {
setShow(state => !state);
};
return <div className={styles.card_container} onClick={handleShow}>
<h3>Title</h3>
{show && (
<div>
<p>Some text</p>
</div>
)}
</div>
}
And use it in your app:
function App() {
const [loaded, setLoaded] = useState(true);
return (
<div className={styles.App}>
{loaded && (
<div className={styles.cards_container}>
<Card />
<Card />
<Card />
</div>
)}
</div>
);
}
|
76383839 | 76384284 | The following query was used as part of a security audit to identify users with access to install/uninstall server plugins at the database level.
SELECT user, host FROM mysql.db WHERE db = 'mysql' and (insert_priv='y') or (delete_priv='y') or (insert_priv='y' and delete_priv='y');
I need to revoke that permission from the users that are listed. Is there a specific privilege I revoke to do this? If so, I can't find it. Or would I simply UPDATE the insert_priv and delete_priv fields directly in the mysql.db table? I'm not a DBA but the closest thing we have at the moment.
| Revoking permission to install plugins? | You are able to install plugins when you have INSERT permissions on the mysql.plugin table, see INSTALL PLUGIN:
To use INSTALL PLUGIN, you must have the INSERT privilege for the mysql.plugin table.
So when you have database wide INSERT permissions on the (internal administrative) database mysql, then you can install plugins.
The same goes for the UNINSTALL PLUGIN statement, see UNINSTALL PLUGIN
To use UNINSTALL PLUGIN, you must have the DELETE privilege for the mysql.plugin table.
Remove the insert_priv and delete_priv privileges for the mysql database, your "normal" MySQL user accounts shouldn't be able to write in this database anyway.
|
76378670 | 76378715 | I am new to pandas. I have this data frame:
df['educ1']
which gives
1 4
2 3
3 3
4 4
5 1
..
28461 3
28462 2
28463 3
28464 2
28465 4
Name: educ1, Length: 28465, dtype: int64
when I try querying with
dt=df[df.educ1 > 1]
It works fine, returning multiple rows, but when I try
college_grad_mask=(df.educ1 > 1)
df.where(college_grad_mask).dropna().head()
it gives 0 rows. I wonder what is wrong here?
| pandas dataframe query not working with where | You likely have NaNs in many columns, try to subset:
df.where(college_grad_mask).dropna(subset=['educ1']).head()
Or better:
df[college_grad_mask].head()
|
76378383 | 76378734 | I'm learning tidymodels. The following code runs nicely:
library(tidyverse)
library(tidymodels)
# Draw a random sample of 2000 to try the models
set.seed(1234)
diamonds <- diamonds %>%
sample_n(2000)
diamonds_split <- initial_split(diamonds, prop = 0.80, strata="price")
diamonds_train <- training(diamonds_split)
diamonds_test <- testing(diamonds_split)
folds <- rsample::vfold_cv(diamonds_train, v = 10, strata="price")
metric <- metric_set(rmse,rsq,mae)
# Model KNN
knn_spec <-
nearest_neighbor(
mode = "regression",
neighbors = tune("k"),
engine = "kknn"
)
knn_rec <-
recipe(price ~ ., data = diamonds_train) %>%
step_log(all_outcomes()) %>%
step_normalize(all_numeric_predictors()) %>%
step_dummy(all_nominal_predictors())
knn_wflow <-
workflow() %>%
add_model(knn_spec) %>%
add_recipe(knn_rec)
knn_grid = expand.grid(k=c(1,5,10,30))
knn_res <-
tune_grid(
knn_wflow,
resamples = folds,
metrics = metric,
grid = knn_grid
)
collect_metrics(knn_res)
autoplot(knn_res)
show_best(knn_res,metric="rmse")
# Best KNN
best_knn_spec <-
nearest_neighbor(
mode = "regression",
neighbors = 10,
engine = "kknn"
)
best_knn_wflow <-
workflow() %>%
add_model(best_knn_spec) %>%
add_recipe(knn_rec)
best_knn_fit <- last_fit(best_knn_wflow, diamonds_split)
collect_metrics(best_knn_fit)
But when I try to fit the best model on the training set and apply it to the test set, I run into problems. The following two lines give me the error: "Error in step_log():
! The following required column is missing from new_data in step 'log_mUSAb': price.
Run rlang::last_trace() to see where the error occurred."
# Predict Manually
f1 = fit(best_knn_wflow,diamonds_train)
p1 = predict(f1,new_data=diamonds_test)
| Problem when scoring new data -- tidymodels | This problem is related to log-transforming the outcome variable in a tidymodels workflow.
For log transformations of the outcome, we strongly recommend that the transformation be done before you pass the data to the recipe(). This is because you are not guaranteed to have an outcome when predicting on new data (which is what happens when you last_fit() a workflow), and so the recipe fails.
You are seeing this here because when you predict on a workflow() object, it only passes the predictors, as that is all it needs; hence the error.
Since a log transformation isn't a learned transformation, you can safely do it beforehand.
diamonds_train$price <- log(diamonds_train$price)
if (!is.null(diamonds_test$price)) {
diamonds_test$price <- log(diamonds_test$price)
}
|
76380693 | 76380922 | Is it possible to name a term created in a formula? This is the scenario:
Create a toy dataset:
set.seed(67253)
n <- 100
x <- sample(c("A", "B", "C"), size = n, replace = TRUE)
y <- sapply(x, switch, A = 0, B = 2, C = 1) + rnorm(n, 2)
dat <- data.frame(x, y)
head(dat)
#> x y
#> 1 B 4.5014474
#> 2 C 4.0252796
#> 3 C 2.4958761
#> 4 C 0.6725571
#> 5 B 4.3364206
#> 6 C 3.9798909
Fit a regression model:
out <- lm(y ~ x, dat)
summary(out)
#>
#> Call:
#> lm(formula = y ~ x, data = dat)
#>
#> Residuals:
#> Min 1Q Median 3Q Max
#> -2.07296 -0.52161 -0.03713 0.53898 2.12497
#>
#> Coefficients:
#> Estimate Std. Error t value Pr(>|t|)
#> (Intercept) 2.1138 0.1726 12.244 < 2e-16 ***
#> xB 1.6772 0.2306 7.274 9.04e-11 ***
#> xC 0.5413 0.2350 2.303 0.0234 *
#> ---
#> Signif. codes: 0 '***' 0.001 '**' 0.01 '*' 0.05 '.' 0.1 ' ' 1
#>
#> Residual standard error: 0.9297 on 97 degrees of freedom
#> Multiple R-squared: 0.3703, Adjusted R-squared: 0.3573
#> F-statistic: 28.52 on 2 and 97 DF, p-value: 1.808e-10
Fit the model again, but use "C" as the reference group:
out2 <- lm(y ~ relevel(factor(x), ref = "C"), dat)
summary(out2)
#>
#> Call:
#> lm(formula = y ~ relevel(factor(x), ref = "C"), data = dat)
#>
#> Residuals:
#> Min 1Q Median 3Q Max
#> -2.07296 -0.52161 -0.03713 0.53898 2.12497
#>
#> Coefficients:
#> Estimate Std. Error t value Pr(>|t|)
#> (Intercept) 2.6551 0.1594 16.653 < 2e-16 ***
#> relevel(factor(x), ref = "C")A -0.5413 0.2350 -2.303 0.0234 *
#> relevel(factor(x), ref = "C")B 1.1359 0.2209 5.143 1.41e-06 ***
#> ---
#> Signif. codes: 0 '***' 0.001 '**' 0.01 '*' 0.05 '.' 0.1 ' ' 1
#>
#> Residual standard error: 0.9297 on 97 degrees of freedom
#> Multiple R-squared: 0.3703, Adjusted R-squared: 0.3573
#> F-statistic: 28.52 on 2 and 97 DF, p-value: 1.808e-10
The variable, x, was re-leveled in the second call to lm(). This is done in the formula and so the name of this term is relevel(factor(x), ref = "C").
Certainly, we can create the term before calling lm(), e.g.:
dat$x2 <- relevel(factor(x), ref = "C")
out3 <- lm(y ~ x2, dat)
summary(out3)
#>
#> Call:
#> lm(formula = y ~ x2, data = dat)
#>
#> Residuals:
#> Min 1Q Median 3Q Max
#> -2.07296 -0.52161 -0.03713 0.53898 2.12497
#>
#> Coefficients:
#> Estimate Std. Error t value Pr(>|t|)
#> (Intercept) 2.6551 0.1594 16.653 < 2e-16 ***
#> x2A -0.5413 0.2350 -2.303 0.0234 *
#> x2B 1.1359 0.2209 5.143 1.41e-06 ***
#> ---
#> Signif. codes: 0 '***' 0.001 '**' 0.01 '*' 0.05 '.' 0.1 ' ' 1
#>
#> Residual standard error: 0.9297 on 97 degrees of freedom
#> Multiple R-squared: 0.3703, Adjusted R-squared: 0.3573
#> F-statistic: 28.52 on 2 and 97 DF, p-value: 1.808e-10
However, can I create a term and name it in the formula? If yes, how?
| How to name a term created in the formula when calling `lm()`? | adapted from the info in this comment : Rename model terms in lm object for forecasting
set.seed(67253)
n <- 100
x <- sample(c("A", "B", "C"), size = n, replace = TRUE)
y <- sapply(x, switch, A = 0, B = 2, C = 1) + rnorm(n, 2)
dat <- data.frame(x, y)
out <- lm(y ~ x, dat)
summary(out)
out2 <- lm(y ~ x2, transform(dat,
x2=relevel(factor(x), ref = "C")))
summary(out2)
|
76378708 | 76378750 | I am trying to translate a Stata code from a paper into R.
The Stata code looks like this:
g tau = year - temp2 if temp2 > temp3 & (bod<. | do<. | lnfcoli<.)
My R translation looks like this:
data <- data %>%
  mutate(tau = if_else((temp2 > temp3) &
                         (is.na(bod) | is.na(do) | is.na(lnfcoli)),
                       year - temp2,
                       NA_integer_))
The problem is that when I run each code I get different results.
This is the result I get when I run the code in Stata:
Year | temp2 | temp3 | bod | do | lnfcoli | tau
---|---|---|---|---|---|---
1986 | 1995 | 1986 | 3.2 | 7.2 | 2.1 | -9
This is the result I get when I run the code in R:
Year | temp2 | temp3 | bod | do | lnfcoli | tau
---|---|---|---|---|---|---
1986 | 1995 | 1986 | 3.2 | 7.2 | 2.1 | NA
Do you know what might be wrong with my R code or what should I modify to get the same output?
| Translating Stata to R yields different results | None of bod, do or lnfcoli is missing (NA), so your condition evaluates to FALSE and the if_else returns NA_integer_ (the false= branch). Stata treats . (missing) as positive infinity, so a check like bod<. is actually testing that bod is not missing.
So the equivalent in R/dplyr is probably:
data %>%
  mutate(
    tau = if_else(
      (temp2 > temp3) & (!(is.na(bod) | is.na(do) | is.na(lnfcoli))),
      year - temp2,
      NA_integer_
    )
  )
# year temp2 temp3 bod do lnfcoli tau
#1 1986 1995 1986 3.2 7.2 2.1 -9
|
76383859 | 76384297 | This C++ code does not compile:
#include <iostream>

int main()
{
    constexpr int kInt = 123;
    struct LocalClass {
        void func(){
            const int b = std::max(kInt, 12);
            //                     ^~~~
            // error: use of local variable with automatic storage from containing function
            std::cout << b;
        }
    };
    LocalClass a;
    a.func();
    return 0;
}
But this works:
#include <iostream>
#include <vector>

int main()
{
    constexpr int kInt = 123;
    struct LocalClass {
        void func(){
            const int b = std::max((int)kInt, 12); // added an extra conversion "(int)"
            std::cout << b;
            const int c = kInt; // this is also ok
            std::cout << c;
            const auto d = std::vector{kInt}; // also works
            std::cout << d[0];
        }
    };
    LocalClass a;
    a.func();
    return 0;
}
Tested under C++17 and C++20, same behaviour.
| Why a local class sometimes cannot access constexpr variables defined in function scope | 1. odr-using local entities from nested function scopes
Note that kInt still has automatic storage duration - so it is a local entity as per:
6.1 Preamble [basic.pre]
(7) A local entity is a variable with automatic storage duration, [...]
In general local entities cannot be odr-used from nested function definitions (as in your LocalClass example)
This is given by:
6.3 One-definition rule [basic.def.odr]
(10) A local entity is odr-usable in a scope if:
[...]
(10.2) for each intervening scope between the point at which the entity is introduced and the scope (where *this is considered to be introduced within the innermost enclosing class or non-lambda function definition scope), either:
the intervening scope is a block scope, or
the intervening scope is the function parameter scope of a lambda-expression that has a simple-capture naming the entity or has a capture-default, and the block scope of the lambda-expression is also an intervening scope.
If a local entity is odr-used in a scope in which it is not odr-usable, the program is ill-formed.
So the only times you can odr-use a local variable within a nested scope are nested block scopes and lambdas which capture the local variable.
i.e.:
void foobar() {
    int x = 0;
    {
        // OK: x is odr-usable here because there is only an intervening block scope
        std::cout << x << std::endl;
    }
    // OK: x is odr-usable here because it is captured by the lambda
    auto l = [&]() { std::cout << x << std::endl; };
    // NOT OK: There is an intervening function definition scope
    struct K {
        int bar() { return x; }
    };
}
11.6 Local class declarations [class.local] contains a few examples of what is and is not allowed, if you're interested.
So if use of kInt constitutes an odr-use, your program is automatically ill-formed.
2. Is naming kInt always an odr-use?
In general naming a variable constitutes an odr-use of that variable:
6.3 One-definition rule [basic.def.odr]
(5) A variable is named by an expression if the expression is an id-expression that denotes it. A variable x that is named by a potentially-evaluated expression E is odr-used by E unless [...]
But because kInt is a constant expression the special exception (5.2) could apply:
6.3 One-definition rule [basic.def.odr]
(5.2) x is a variable of non-reference type that is usable in constant expressions and has no mutable subobjects, and E is an element of the set of potential results of an expression of non-volatile-qualified non-class type to which the lvalue-to-rvalue conversion is applied, or
So naming kInt is not deemed an odr-use as long as it ...
is of non-reference type (✓)
is usable in constant expressions (✓)
does not contain mutable members (✓)
and the expression that contains kInt ...
must produce a non-volatile-qualified non-class type (✓)
must apply the lvalue-to-rvalue conversion (?)
So we pass almost all the checks for the naming of kInt to not be an odr-use, and therefore be well-formed.
The only condition that is not always true in your example is the lvalue-to-rvalue conversion that must happen.
If the lvalue-to-rvalue conversion does not happen (i.e. no temporary is introduced), then your program is ill-formed - if it does happen then it is well-formed.
// lvalue-to-rvalue conversion will be applied to kInt:
// (well-formed)
const int c = kInt;
std::vector v{kInt}; // vector constructor takes a std::size_t
// lvalue-to-rvalue conversion will NOT be applied to kInt:
// (it is passed by reference to std::max)
// (ill-formed)
std::max(kInt, 12); // std::max takes arguments by const reference (!)
This is also the reason why std::max((int)kInt, 12); is well-formed - the explicit cast introduces a temporary variable due to the lvalue-to-rvalue conversion being applied.
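Incidentally, if you control the declaration, a common workaround (a sketch, not from the original question) is to give the variable static storage duration - a static constexpr variable is not a local entity per [basic.pre] (7), so the nested class can use it freely, even by const reference:
#include <algorithm>
#include <iostream>
int main()
{
    static constexpr int kInt = 123; // static storage duration -> not a "local entity"
    struct LocalClass {
        void func() {
            const int b = std::max(kInt, 12); // OK: binding a const reference is fine now
            std::cout << b;
        }
    };
    LocalClass a;
    a.func();
    return 0;
}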
|
76380850 | 76380929 | Let's say I have a React Select with a placeholder ('Selected Value: '), and I want to keep the placeholder and append it into the selected value so that it looks something like ('Selected Value: 1'). Is there any way to do it?
import Select from "react-select";
export default function App() {
const options = [
{ value: 1, label: 1 },
{ value: 2, label: 2 },
{ value: 3, label: 3 },
{ value: 4, label: 4 }
];
const placeholder = "Selected Value: ";
return (
<div className="App">
<Select options={options} placeholder={placeholder} />
</div>
);
}
codesandbox: https://codesandbox.io/s/brave-chatterjee-pjol2d?file=/src/App.js:23-385
EDIT: Sorry, forget to mention, I do not want the placeholder to directly be in the labels of the options
| How do I keep and append placeholder text into the selected value in React Select? | You can keep the selected value in component state and build the placeholder from it:
import Select from "react-select";
import { useState } from "react";
export default function App() {
const [selectBoxValue, setSelectBoxValue] = useState('')
const options = [
{ value: 1, label: 1 },
{ value: 2, label: 2 },
{ value: 3, label: 3 },
{ value: 4, label: 4 }
];
const placeholder = `Selected Value: ${selectBoxValue}`;
return (
<div className="App">
<Select
options={options}
placeholder={placeholder}
value={placeholder}
onChange={(event) => setSelectBoxValue(event.value)} />
</div>
);
}
|
76380934 | 76380982 | Installed FlareSolverr in docker.
cURL works correctly and returns the correct response.
curl -L -X POST 'http://localhost:8191/v1' -H 'Content-Type: application/json' --data-raw '{
"cmd": "request.get",
"url":"http://google.com",
"maxTimeout": 60000
}'
But when using it from Python + Flask, I get an error - 405 Method Not Allowed.
def get_parsed_page(url, delay=0.5):
data = {
"cmd": "request.get",
"url": url,
"maxTimeout": 60000
}
headers = {"Content-Type": "application/json"}
time.sleep(delay)
print(requests.get("***:8191/v1", headers=headers, data=data))
return BeautifulSoup(requests.get("***:8191/v1", headers=headers, data=data).text, 'lxml')
| Method not allowed, flask, python | you are using a GET request in your python code. It should be a POST request. Use requests.post
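A minimal corrected sketch (it also sends the payload with requests' json= parameter so the Content-Type header is set automatically; the localhost URL mirrors the working cURL call, and the solution.response field is an assumption based on FlareSolverr's documented response shape):
import time
import requests
from bs4 import BeautifulSoup

def get_parsed_page(url, delay=0.5):
    payload = {
        "cmd": "request.get",
        "url": url,
        "maxTimeout": 60000
    }
    time.sleep(delay)
    # POST, not GET - FlareSolverr's /v1 endpoint only accepts POST
    resp = requests.post("http://localhost:8191/v1", json=payload)
    html = resp.json()["solution"]["response"]  # the solved page's HTML
    return BeautifulSoup(html, "lxml")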
|
76378592 | 76378760 | I have been having this problem since about last week with this code:
a = int(input())
b = int(input())
c = int(input())
print(min(a+b,b+c,c+a))
So when I enter three inputs like this: 2 5 6 (three integers on 1 line),
it shows me an error:
File "c:\Users\Administrator\Documents\Code\Python\baitap(LQDOJ)\EZMIN.py", line 1, in <module>
a = int(input())
ValueError: invalid literal for int() with base 10: '2 5 6'
I see that it only identifies 'a' but not 'b' and 'c'. Can you show me how to fix it, or are there other ways to write it in 1 line?
| Can't write three values in 1 line | Method 1
The error you're encountering is because you're trying to convert the entire string '2 5 6' into an integer using the int() function. However, the int() function expects a single integer value, not a string containing multiple numbers.
code:
a = int(input())
b = int(input())
c = int(input())
x = a + b
y = b + c
z = c + a
min_value = x
if y < min_value:
min_value = y
if z < min_value:
min_value = z
print("The minimum value is:", min_value)
you'll be prompted to enter the values for a, b, and c separately, and the code will correctly calculate and display the minimum value among the three sums.
Method 2
This one is a more optimized solution:
input_values = input()
input_list = list(map(int, input_values.split()))
min_value = min(input_list[0] + input_list[1], input_list[1] + input_list[2], input_list[2] + input_list[0])
print("The minimum value is:", min_value)
The split() method splits the input string at spaces, creating a list of string elements.
The map() function applies the int() function to each element of the split list, converting them into integers.
list() is used to convert the resulting map object into a list of integers.
The resulting list is stored in input_list for further calculations.
|
76383945 | 76384326 | I am trying to define a custom interface like this:
export interface IAPIRequest<B extends any, P extends any, Q extends any>
{
body: B;
params: P;
query: Q;
}
This type is supposed to be extended by a lot of other types, one for each request my API is supposed to handle.
For example :
export interface ILoginRequest extends IAPIRequest<{ email: string; password: string; }, undefined, undefined> {}
It works a little, but every time I use this interface, I must provide all the properties even if they are undefined.
Example:
const login = async ({ body }: ILoginRequest) =>
{
...
}
const response = await login({ body: { email: 'mail@test.com', password: 'verystrongpassword' }, params: undefined, query: undefined });
It doesn't work if I don't provide the undefined properties.
How can I define an abstract type for IAPIRequest that would save me from providing undefined values?
PS : I've tried this as well
export interface IAPIRequest<B extends any, P extends any, Q extends any>
{
body?: B;
params?: P;
query?: Q;
}
Even for IAPIRequest<B, P, Q> where none of B, P, or Q allow undefined, I still get that the properties might be undefined
| Typescript type extension | TypeScript doesn't automatically treat properties that accept undefined to be optional (although the converse, treating optional properties as accepting undefined, is true, unless you've enabled --exactOptionalPropertyTypes). There is a longstanding open feature request for this at microsoft/TypeScript#12400 (the title is about optional function parameters, not object properties, but the issue seems to have expanded to include object properties also). Nothing has been implemented there, although the discussion describes various workarounds.
Let's define our own workaround; a utility type UndefinedIsOptional<T> that produces a version of T such that any property accepting undefined is optional. It could look like this:
type UndefinedIsOptional<T extends object> = (Partial<T> &
{ [K in keyof T as undefined extends T[K] ? never : K]: T[K] }
) extends infer U ? { [K in keyof U]: U[K] } : never
That's a combination of Partial<T> which turns all properties optional, and a key remapped type that suppresses all undefined-accepting properties. The intersection of those is essentially what you want (an intersection of an optional prop and a required prop is a required prop) but I use a technique described at How can I see the full expanded contract of a Typescript type? to display the type in a more palatable manner.
Then we can define your type as
type IAPIRequest<B, P, Q> = UndefinedIsOptional<{
body: B;
params: P;
query: Q;
}>
and note that this must be a type alias and not an interface because the compiler needs to know exactly which properties will appear (and apparently their optional-ness) to be an interface. This won't matter much with your example code but you should be aware of it.
Let's test it out:
type ILR = IAPIRequest<{ email: string; password: string; }, undefined, undefined>
/* type ILR = {
body: {
email: string;
password: string;
};
params?: undefined;
query?: undefined;
} */
That looks like what you wanted, so you can define your ILoginRequest interface:
interface ILoginRequest extends IAPIRequest<
{ email: string; password: string; }, undefined, undefined> {
}
Also, let's just look at what happens when the property includes undefined but is not only undefined:
type Other = IAPIRequest<{ a: string } | undefined, number | undefined, { b: number }>;
/* type Other = {
body?: {
a: string;
} | undefined;
params?: number | undefined;
query: {
b: number;
};
} */
Here body and params are optional because undefined is possible, but query is not because undefined is impossible.
Playground link to code
|
76380868 | 76380985 | This Quarkus mailer guide requires that the sending email is preconfigured in property file: quarkus.mailer.from=YOUREMAIL@gmail.com. However, my use case for email includes unique originator email based on user. Using the provided method looks something like:
public void sendEmail(EmailSender emailSender) {
// Send to each recipient
emailMessageRepository.findByEmailSenderId(emailSender.getId())
.forEach(emailMessage ->
mailer.send(
Mail.withText(emailMessage.getEmail(),
emailSender.getSubject(),
emailSender.getMessage())
);
);
}
How can I include the sender's email address (i.e. 'from') when the Mail.withText() method only provides for recipient email?
| How to configure the Quarkus Mailer extension to allow dynamic 'from' email addresses based on user? | The documentation showcases how to use multiple mailer configurations (multiple From addresses):
quarkus.mailer.from=your-from-address@gmail.com
quarkus.mailer.host=smtp.gmail.com
quarkus.mailer.aws.from=your-from-address@gmail.com
quarkus.mailer.aws.host=${ses.smtp}
quarkus.mailer.aws.port=587
quarkus.mailer.sendgrid.from=your-from-address@gmail.com
quarkus.mailer.sendgrid.host=${sendgrid.smtp-host}
quarkus.mailer.sendgrid.port=465
So you would write:
quarkus.mailer.from=default@gmail.com
quarkus.mailer.aws.from=your_aws@gmail.com
quarkus.mailer.sendgrid.from=your_sendgrid@gmail.com
Then you would inject them as shown below and use them based on whom you want to send with:
@Inject
@MailerName("aws")
Mailer mailer;
@Inject
@MailerName("sendgrid")
Mailer mailer;
aws and sendgrid are the names used in quarkus.mailer.<name>.from
https://quarkus.io/guides/mailer-reference#multiple-mailer-configurations
The Quarkus Mailer is implemented on top of the Vert.x Mail Client,
providing an asynchronous and non-blocking way to send emails.
If you need fine control on how the mail is sent, for instance if you need to retrieve the message ids, you can inject the underlying client, and use it directly:
@Inject MailClient client;
Then use it:
MailMessage message = new MailMessage();
message.setFrom("user@example.com (Example User)");
message.setTo("recipient@example.org");
message.setCc("Another User <another@example.net>");
message.setText("this is the plain message text");
message.setHtml("this is html text <a href=\"http://vertx.io\">vertx.io</a>");
To send using MailClient:
mailClient.sendMail(message)
.onSuccess(System.out::println)
.onFailure(Throwable::printStackTrace);
https://quarkus.io/guides/mailer-reference#using-the-underlying-vert-x-mail-client
https://vertx.io/docs/vertx-mail-client/java/
|
76380847 | 76380988 | This is a question from the Rust Nomicon's lifetimes chapter.
The first example compiles: x is a reference, and the compiler can shrink its lifetime down to its last use here, println!(), so the borrow ends after this line.
let mut data = vec![1, 2, 3];
let x = &data[0];
println!("{}", x);
// This is OK, x is no longer needed
data.push(4);
But the case is different when x is a struct that implements the Drop trait.
#[derive(Debug)]
struct X<'a>(&'a i32);
impl Drop for X<'_> {
fn drop(&mut self) {}
}
let mut data = vec![1, 2, 3];
let x = X(&data[0]);
println!("{:?}", x);
data.push(4);
// Here, the destructor is run and therefore this'll fail to compile.
The Nomicon says that in this case drop() is only executed at the very end of the scope, so x stays valid until the last line.
But why can't the compiler minimize the lifetime of x to its last use? And would running drop() just after the last use have nontrivial side effects when x implements the Drop trait?
| Why Drop trait is only executed at the end of the scope, instead of after the last use? | The primary reason is that it was once defined to be like that, and now changing it isn't possible any more because it wouldn't be backwards-compatible and might break stuff.
Your code is easily fixable by introducing a nested scope, though, which is (to my understanding) best practice in those situations:
#[derive(Debug)]
struct X<'a>(&'a i32);
impl Drop for X<'_> {
fn drop(&mut self) {}
}
fn main() {
let mut data = vec![1, 2, 3];
{
let x = X(&data[0]);
println!("{:?}", x);
}
data.push(4);
}
X(1)
Alternatively, you could drop it manually:
#[derive(Debug)]
struct X<'a>(&'a i32);
impl Drop for X<'_> {
fn drop(&mut self) {}
}
fn main() {
let mut data = vec![1, 2, 3];
let x = X(&data[0]);
println!("{:?}", x);
drop(x);
data.push(4);
}
X(1)
|
76384211 | 76384348 | I have a microservice architecture and need some common logic.
When is it necessary to create a custom Spring Boot starter rather than a new microservice?
| When it's necessary to create a custom spring boot starter rather than create a new microservice? | In my experience, creating a new microservice from the ground up is generally done to prevent a monolith from forming. Microservices should generally have one job and do it well. You don't want to muddy up the implementation and purpose of your microservice by adding unrelated operations.
There are many design patterns for the "types" you could be creating but I won't go into too much detail there. Overall, based on what business purpose you are solving you can select your design and begin development. Different designs should be separated and not combined into monolithic styles. Here is a good article showcasing design options: https://www.openlegacy.com/blog/microservices-architecture-patterns/
If you find yourself having to re-create multiple microservices serving different use cases, you can always utilize a tool such as yeoman to speed up creating these new projects. You can build a generator that will give you a working template so you don't have to spend the time redeveloping from the ground up each time you need a different service.
Here is a guide that I wrote recently on creating your own yeoman generator: https://medium.com/@dylanlamott/building-a-yeoman-generator-line-by-line-6966debb39a3
|
76378628 | 76378769 | AttributeError: 'int' object has no attribute 'astype' in automatic WhatsApp message sender script
The following is an automated WhatsApp message sender script I partially developed. It worked fine with an Excel file containing 5 numbers. However, when I scaled it up to 1700+ numbers, I get the following traceback:
Traceback (most recent call last):
File "c:\Users\MSI\Desktop\AutoSenderPY\main.py", line 9, in <module>
cellphone = data.loc[i,'Cellphone'].astype(str)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^
AttributeError: 'int' object has no attribute 'astype'
The script is the following:
import pandas as pd
import webbrowser as web
import pyautogui as pg
import time
data = pd.read_excel("book1.xlsx", sheet_name='sheet1')
for i in range(len(data)):
cellphone = data.loc[i,'Cellphone'].astype(str)
message = "Test Message"
web.open("https://web.whatsapp.com/send?phone=" + cellphone + "&text=" + message)
time.sleep(5.5)
pg.click(1230,964)
time.sleep(1)
pg.press('enter')
time.sleep(2)
pg.hotkey('ctrl', 'w')
time.sleep(1)
Why is that happening, and how can I get it working for those 1700+ numbers?
| How to fix 'int' object has no attribute 'astype' error when sending WhatsApp messages to large number of contacts using Python and pandas? | Try using -
cellphone = str(data.loc[i,'Cellphone'])
Here loc returns a single scalar element - a plain Python int, as the traceback shows - which has no .astype method; calling str() on it is enough.
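Alternatively, you could convert the whole column up front so every lookup already returns a string - a small sketch of the relevant part of the script:
data = pd.read_excel("book1.xlsx", sheet_name='sheet1')
data['Cellphone'] = data['Cellphone'].astype(str)  # convert the whole column once

for i in range(len(data)):
    cellphone = data.loc[i, 'Cellphone']  # already a str now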
|
76378370 | 76378800 | I have two tables, one has course name and course ID. The second table has the ID of the students and the course ID they have taken. I need to find all the class IDβs of the classes a student hasnβt taken. For example, in table 2 student 03 has taken classes 01 and 02 but not 03 and 04 from table one. The course IDβs 03 and 04 from table one are what I need to return (all the classes student 03 hasn't taken). I've tried numerous queries and the last one I tried is:
SELECT table1.* FROM table1
LEFT JOIN table2
ON
table1.course_ID = table2.course_ID
WHERE
table2.course_ID IS NULL
AND
table2.user_ID != 3
Appreciate your help!
table 1
| course_ID | courseName |
|-----------|------------|
| 01        | math       |
| 02        | English    |
| 03        | art        |
| 04        | music      |
table 2
| cert_Id | course_ID | user_ID |
|---------|-----------|---------|
| 01      | 01        | 03      |
| 02      | 02        | 03      |
| SQL How to return record ID's not included in table 2 from table 1 based off of user ID in table 2 | As per your current requirement, the query below will work:
SELECT * FROM table1 t1
WHERE course_ID
NOT IN (SELECT course_ID FROM table2 WHERE user_ID =3)
If you have more records in table2 and need to populate more than one student's details, then you have to use other logic (see the sketch at the end of this answer).
If you want to modify your original query instead, use it as below:
SELECT table1.* FROM table1
LEFT JOIN table2 ON table1.course_ID = table2.course_ID
AND table2.user_ID = 3
WHERE table2.course_ID IS NULL
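And if you ever need the missing courses for every student at once, a hedged sketch of that other logic could cross join all users with all courses and keep the pairs that have no match in table2:
SELECT u.user_ID, c.course_ID
FROM (SELECT DISTINCT user_ID FROM table2) u
CROSS JOIN table1 c
LEFT JOIN table2 t
ON t.user_ID = u.user_ID AND t.course_ID = c.course_ID
WHERE t.course_ID IS NULL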
|
76380967 | 76381059 | In T-SQL I am parsing JSON and using PIVOT.
Select * from (select [key],convert(varchar,[value])[value]
from openjson ('{"Name":"tew","TabTypeId":9,"Type":3}'))A
pivot(max(value) for [key] in ([Name],tabTypeId,[Type]))b
It is not treating tabTypeId as equal to TabTypeId. I am getting NULL for tabTypeId.
If I use TabTypeId I get the value 9.
Why is it happening?
| Why is SQL Server Pivot being case sensitive on TabTypeId instead of treating it as the actual column name? | It's not PIVOT that is case sensitive, it's the data returned from OPENJSON that is. If you check the data returned from it, you'll see that the column key is a binary collation:
SELECT name, system_type_name, collation_name
FROM sys.dm_exec_describe_first_result_set(N'SELECT [key], CONVERT(varchar, [value]) AS [value] FROM OPENJSON(''{"Name":"tew","TabTypeId":9,"Type":3}'');',NULL,NULL)
| name  | system_type_name | collation_name               |
|-------|------------------|------------------------------|
| key   | nvarchar(4000)   | Latin1_General_BIN2          |
| value | varchar(30)      | SQL_Latin1_General_CP1_CI_AS |
For binary collations the actual bytes of the characters must match. As such N'tabTypeId' and N'TabTypeId' are not equal as N'T' and N't' have the binary values 0x5400 and 0x7400.
Though I am unsure why you are using PIVOT at all; just define your columns in your OPENJSON call:
SELECT name, --Columns are intentionally demonstrating non-case sensitivity
tabTypeId,
type
FROM OPENJSON('{"Name":"tew","TabTypeId":9,"Type":3}')
WITH (Name varchar(3),
TabTypeId int,
Type int);
Note that in the WITH clause of OPENJSON the column names are still case sensitive. tabTypeId int would also yield NULL. If you "had" to have a column called tabTypeId defined prior to the SELECT you would use tabTypeId int '$.TabTypeId' instead.
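For example, to keep a lower-cased column name while still matching the JSON property exactly:
SELECT tabTypeId
FROM OPENJSON('{"Name":"tew","TabTypeId":9,"Type":3}')
WITH (tabTypeId int '$.TabTypeId');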
|
76384091 | 76384360 | I have a query here that uses four subqueries inside a single CTE, and each subquery is scanning every row of another CTE for each row in itself. I would think that this is very inefficient.
Are there any SQL optimizations that I can implement now that the proof of concept is finished?
I don't have write access to the database, so optimizations would be required within the select clause.
WITH datetable AS (
SELECT generate_series(
DATE_TRUNC('week', (SELECT MIN(created_at) FROM org_accounts.deleted_users)),
DATE_TRUNC('week', now()),
'1 week'::INTERVAL
)::DATE AS week_start
), all_users AS (
SELECT
id,
registered_at,
NULL AS deleted_at
FROM org_accounts.users
WHERE status = 'active'
AND org_accounts.__user_is_qa(id) <> 'Y'
AND email NOT LIKE '%@org%'
UNION ALL
SELECT
id,
created_at AS registered_at,
deleted_at
FROM org_accounts.deleted_users
WHERE deleter_id = id
AND email NOT LIKE '%@org%'
), weekly_activity AS (
SELECT
DATE_TRUNC('week', date)::DATE AS week_start,
COUNT(DISTINCT user_id) AS weekly_active_users
FROM (
SELECT user_id, date
FROM org_storage_extra.stats_user_daily_counters
WHERE type in ('created_file', 'created_folder', 'created_secure_fetch')
UNION ALL
SELECT user_id, date
FROM ipfs_pinning_facility.stats_user_daily_counters
WHERE type <> 'shares_viewed_by_others'
) activity_ids_dates
WHERE EXISTS(SELECT 1 from all_users WHERE id = user_id)
GROUP BY week_start
), preprocessed AS (
SELECT
week_start,
(
SELECT COUNT(DISTINCT id)
FROM all_users
WHERE registered_at < week_start
AND (deleted_at IS NULL OR deleted_at > week_start)
) AS actual_users,
(
SELECT COUNT(DISTINCT id)
FROM all_users
WHERE deleted_at < week_start + '1 week'::INTERVAL
) AS cumulative_churned_users,
(
SELECT COUNT(DISTINCT id)
FROM all_users
WHERE registered_at >= week_start
AND registered_at < week_start + '1 week'::INTERVAL
) AS weekly_new_users,
(
SELECT COUNT(DISTINCT id)
FROM all_users
WHERE deleted_at >= week_start
AND deleted_at < week_start + '1 week'::INTERVAL
) AS weekly_churned_users,
COALESCE(weekly_active_users, 0) AS weekly_active_users
FROM datetable dt
LEFT JOIN weekly_activity USING (week_start)
ORDER BY week_start DESC
)
SELECT
week_start AS for_week_of,
actual_users + cumulative_churned_users AS cumulative_users,
cumulative_churned_users,
cumulative_churned_users::FLOAT / NULLIF((actual_users + cumulative_churned_users)::FLOAT, 0) AS cumulated_churn_rate,
actual_users,
weekly_new_users,
weekly_churned_users,
weekly_active_users,
weekly_churned_users::FLOAT / NULLIF(actual_users::FLOAT, 0) AS weekly_churn_rate
FROM preprocessed;
Results of query analysis:
QUERY PLAN
Subquery Scan on preprocessed (cost=40875.45..7501783.95 rows=1000 width=68) (actual time=1553.471..13613.116 rows=231 loops=1)
Output: preprocessed.week_start, (preprocessed.actual_users + preprocessed.cumulative_churned_users), preprocessed.cumulative_churned_users, ((preprocessed.cumulative_churned_users)::double precision / NULLIF(((preprocessed.actual_users + preprocessed.cumulative_churned_users))::double precision, '0'::double precision)), preprocessed.actual_users, preprocessed.weekly_new_users, preprocessed.weekly_churned_users, preprocessed.weekly_active_users, ((preprocessed.weekly_churned_users)::double precision / NULLIF((preprocessed.actual_users)::double precision, '0'::double precision))
Buffers: shared hit=287734 read=1964, temp read=274840 written=873
CTE all_users
-> Append (cost=0.00..30953.99 rows=70293 width=32) (actual time=0.099..1313.372 rows=71228 loops=1)
Buffers: shared hit=285995 read=1964
-> Seq Scan on org_accounts.users (cost=0.00..27912.65 rows=70009 width=32) (actual time=0.099..1289.469 rows=70007 loops=1)
Output: users.id, users.registered_at, NULL::timestamp with time zone
Filter: ((users.email !~~ '%@mailinator%'::text) AND (users.email !~~ '%@org%'::text) AND (users.email !~~ '%testaccnt%'::text) AND (users.status = 'active'::text) AND ((org_accounts.__user_is_qa(users.id))::text <> 'Y'::text))
Rows Removed by Filter: 9933
Buffers: shared hit=285269 read=1964
-> Seq Scan on org_accounts.deleted_users (cost=0.00..1986.94 rows=284 width=32) (actual time=0.014..14.267 rows=1221 loops=1)
Output: deleted_users.id, deleted_users.created_at, deleted_users.deleted_at
Filter: ((deleted_users.email !~~ '%@mailinator%'::text) AND (deleted_users.email !~~ '%@org%'::text) AND (deleted_users.email !~~ '%testaccnt%'::text) AND (deleted_users.deleter_id = deleted_users.id))
Rows Removed by Filter: 61826
Buffers: shared hit=726
-> Merge Left Join (cost=9921.47..7470794.97 rows=1000 width=44) (actual time=1553.467..13612.496 rows=231 loops=1)
Output: (((generate_series(date_trunc('week'::text, $5), date_trunc('week'::text, now()), '7 days'::interval)))::date), (SubPlan 2), (SubPlan 3), (SubPlan 4), (SubPlan 5), COALESCE(weekly_activity.weekly_active_users, '0'::bigint)
Inner Unique: true
Merge Cond: ((((generate_series(date_trunc('week'::text, $5), date_trunc('week'::text, now()), '7 days'::interval)))::date) = weekly_activity.week_start)
Buffers: shared hit=287734 read=1964, temp read=274840 written=873
-> Sort (cost=1601.45..1603.95 rows=1000 width=4) (actual time=10.108..10.250 rows=231 loops=1)
Output: (((generate_series(date_trunc('week'::text, $5), date_trunc('week'::text, now()), '7 days'::interval)))::date)
Sort Key: (((generate_series(date_trunc('week'::text, $5), date_trunc('week'::text, now()), '7 days'::interval)))::date) DESC
Sort Method: quicksort Memory: 35kB
Buffers: shared hit=726
-> Result (cost=1514.10..1541.62 rows=1000 width=4) (actual time=9.986..10.069 rows=231 loops=1)
Output: ((generate_series(date_trunc('week'::text, $5), date_trunc('week'::text, now()), '7 days'::interval)))::date
Buffers: shared hit=726
InitPlan 6 (returns $5)
-> Aggregate (cost=1514.09..1514.10 rows=1 width=8) (actual time=9.974..9.975 rows=1 loops=1)
Output: min(deleted_users_1.created_at)
Buffers: shared hit=726
-> Seq Scan on org_accounts.deleted_users deleted_users_1 (cost=0.00..1356.47 rows=63047 width=8) (actual time=0.006..4.332 rows=63047 loops=1)
Output: deleted_users_1.id, deleted_users_1.email, deleted_users_1.created_at, deleted_users_1.deleter_id, deleted_users_1.deleted_at, deleted_users_1.registration_app
Buffers: shared hit=726
-> ProjectSet (cost=0.00..5.03 rows=1000 width=8) (actual time=9.984..10.030 rows=231 loops=1)
Output: generate_series(date_trunc('week'::text, $5), date_trunc('week'::text, now()), '7 days'::interval)
Buffers: shared hit=726
-> Result (cost=0.00..0.01 rows=1 width=0) (actual time=0.000..0.001 rows=1 loops=1)
-> Sort (cost=8320.02..8320.52 rows=200 width=12) (actual time=1475.315..1475.418 rows=159 loops=1)
Output: weekly_activity.weekly_active_users, weekly_activity.week_start
Sort Key: weekly_activity.week_start DESC
Sort Method: quicksort Memory: 32kB
Buffers: shared hit=287008 read=1964, temp read=412 written=872
-> Subquery Scan on weekly_activity (cost=8050.90..8312.37 rows=200 width=12) (actual time=1466.686..1475.279 rows=159 loops=1)
Output: weekly_activity.weekly_active_users, weekly_activity.week_start
Buffers: shared hit=287008 read=1964, temp read=412 written=872
-> GroupAggregate (cost=8050.90..8310.37 rows=200 width=12) (actual time=1466.685..1475.254 rows=159 loops=1)
Output: ((date_trunc('week'::text, ("*SELECT* 1".date)::timestamp with time zone))::date), count(DISTINCT "*SELECT* 1".user_id)
Group Key: ((date_trunc('week'::text, ("*SELECT* 1".date)::timestamp with time zone))::date)
Buffers: shared hit=287008 read=1964, temp read=412 written=872
-> Sort (cost=8050.90..8136.22 rows=34130 width=20) (actual time=1466.668..1468.872 rows=23005 loops=1)
Output: ((date_trunc('week'::text, ("*SELECT* 1".date)::timestamp with time zone))::date), "*SELECT* 1".user_id
Sort Key: ((date_trunc('week'::text, ("*SELECT* 1".date)::timestamp with time zone))::date)
Sort Method: quicksort Memory: 2566kB
Buffers: shared hit=287008 read=1964, temp read=412 written=872
-> Hash Join (cost=1586.09..5481.12 rows=34130 width=20) (actual time=1411.350..1462.022 rows=23005 loops=1)
Output: (date_trunc('week'::text, ("*SELECT* 1".date)::timestamp with time zone))::date, "*SELECT* 1".user_id
Inner Unique: true
Hash Cond: ("*SELECT* 1".user_id = all_users.id)
Buffers: shared hit=287008 read=1964, temp read=412 written=872
-> Append (cost=0.00..3080.17 rows=68261 width=20) (actual time=0.010..25.441 rows=68179 loops=1)
Buffers: shared hit=1013
-> Subquery Scan on "*SELECT* 1" (cost=0.00..1018.43 rows=21568 width=20) (actual time=0.008..7.895 rows=21532 loops=1)
Output: "*SELECT* 1".date, "*SELECT* 1".user_id
Buffers: shared hit=372
-> Seq Scan on org_storage_extra.stats_user_daily_counters (cost=0.00..802.75 rows=21568 width=20) (actual time=0.008..5.910 rows=21532 loops=1)
Output: stats_user_daily_counters.user_id, stats_user_daily_counters.date
Filter: (stats_user_daily_counters.type = ANY ('{created_file,created_folder,created_secure_fetch}'::text[]))
Rows Removed by Filter: 9795
Buffers: shared hit=372
-> Subquery Scan on "*SELECT* 2" (cost=0.00..1720.44 rows=46693 width=20) (actual time=0.009..12.460 rows=46647 loops=1)
Output: "*SELECT* 2".date, "*SELECT* 2".user_id
Buffers: shared hit=641
-> Seq Scan on ipfs_pinning_facility.stats_user_daily_counters stats_user_daily_counters_1 (cost=0.00..1253.51 rows=46693 width=20) (actual time=0.009..8.209 rows=46647 loops=1)
Output: stats_user_daily_counters_1.user_id, stats_user_daily_counters_1.date
Filter: (stats_user_daily_counters_1.type <> 'shares_viewed_by_others'::text)
Rows Removed by Filter: 2354
Buffers: shared hit=641
-> Hash (cost=1583.59..1583.59 rows=200 width=16) (actual time=1411.250..1411.251 rows=71228 loops=1)
Output: all_users.id
Buckets: 131072 (originally 1024) Batches: 2 (originally 1) Memory Usage: 3073kB
Buffers: shared hit=285995 read=1964, temp read=100 written=717
-> HashAggregate (cost=1581.59..1583.59 rows=200 width=16) (actual time=1383.986..1398.270 rows=71228 loops=1)
Output: all_users.id
Group Key: all_users.id
Batches: 5 Memory Usage: 4161kB Disk Usage: 1544kB
Buffers: shared hit=285995 read=1964, temp read=100 written=560
-> CTE Scan on all_users (cost=0.00..1405.86 rows=70293 width=16) (actual time=0.102..1351.241 rows=71228 loops=1)
Output: all_users.id
Buffers: shared hit=285995 read=1964, temp written=296
SubPlan 2
-> Aggregate (cost=1777.05..1777.06 rows=1 width=8) (actual time=20.197..20.197 rows=1 loops=231)
Output: count(DISTINCT all_users_1.id)
Buffers: temp read=68607 written=1
-> CTE Scan on all_users all_users_1 (cost=0.00..1757.33 rows=7888 width=16) (actual time=0.883..10.874 rows=27239 loops=231)
Output: all_users_1.id, all_users_1.registered_at, all_users_1.deleted_at
Filter: ((all_users_1.registered_at < (((generate_series(date_trunc('week'::text, $5), date_trunc('week'::text, now()), '7 days'::interval)))::date)) AND ((all_users_1.deleted_at IS NULL) OR (all_users_1.deleted_at > (((generate_series(date_trunc('week'::text, $5), date_trunc('week'::text, now()), '7 days'::interval)))::date))))
Rows Removed by Filter: 43989
Buffers: temp read=68607 written=1
SubPlan 3
-> Aggregate (cost=1815.90..1815.91 rows=1 width=8) (actual time=11.215..11.215 rows=1 loops=231)
Output: count(DISTINCT all_users_2.id)
Buffers: temp read=68607
-> CTE Scan on all_users all_users_2 (cost=0.00..1757.33 rows=23431 width=16) (actual time=11.009..11.150 rows=231 loops=231)
Output: all_users_2.id, all_users_2.registered_at, all_users_2.deleted_at
Filter: (all_users_2.deleted_at < ((((generate_series(date_trunc('week'::text, $5), date_trunc('week'::text, now()), '7 days'::interval)))::date) + '7 days'::interval))
Rows Removed by Filter: 70997
Buffers: temp read=68607
SubPlan 4
-> Aggregate (cost=1933.94..1933.95 rows=1 width=8) (actual time=14.515..14.515 rows=1 loops=231)
Output: count(DISTINCT all_users_3.id)
Buffers: temp read=68607
-> CTE Scan on all_users all_users_3 (cost=0.00..1933.06 rows=351 width=16) (actual time=2.264..14.424 rows=308 loops=231)
Output: all_users_3.id, all_users_3.registered_at, all_users_3.deleted_at
Filter: ((all_users_3.registered_at >= (((generate_series(date_trunc('week'::text, $5), date_trunc('week'::text, now()), '7 days'::interval)))::date)) AND (all_users_3.registered_at < ((((generate_series(date_trunc('week'::text, $5), date_trunc('week'::text, now()), '7 days'::interval)))::date) + '7 days'::interval)))
Rows Removed by Filter: 70920
Buffers: temp read=68607
SubPlan 5
-> Aggregate (cost=1933.94..1933.95 rows=1 width=8) (actual time=6.556..6.556 rows=1 loops=231)
Output: count(DISTINCT all_users_4.id)
Buffers: temp read=68607
-> CTE Scan on all_users all_users_4 (cost=0.00..1933.06 rows=351 width=16) (actual time=6.441..6.547 rows=5 loops=231)
Output: all_users_4.id, all_users_4.registered_at, all_users_4.deleted_at
Filter: ((all_users_4.deleted_at >= (((generate_series(date_trunc('week'::text, $5), date_trunc('week'::text, now()), '7 days'::interval)))::date)) AND (all_users_4.deleted_at < ((((generate_series(date_trunc('week'::text, $5), date_trunc('week'::text, now()), '7 days'::interval)))::date) + '7 days'::interval)))
Rows Removed by Filter: 71223
Buffers: temp read=68607
Planning Time: 0.612 ms
Execution Time: 13615.054 ms
| PSQL / SQL: Is it possible to further optimize this query without requiring write access to the database? | An obvious optimization is to eliminate redundant table scans. There isn't any need in preprocessed to query from all_users more than once. The following query uses COUNT with FILTER to gather the same statistics:
WITH datetable AS (SELECT GENERATE_SERIES(
DATE_TRUNC('week', (SELECT MIN(created_at) FROM org_accounts.deleted_users)),
DATE_TRUNC('week', NOW()),
'1 week'::INTERVAL
)::DATE AS week_start),
all_users AS (SELECT id,
registered_at,
NULL AS deleted_at
FROM org_accounts.users
WHERE status = 'active'
AND org_accounts.__user_is_qa(id) <> 'Y'
AND email NOT LIKE '%@org%'
UNION ALL
SELECT id,
created_at AS registered_at,
deleted_at
FROM org_accounts.deleted_users
WHERE deleter_id = id
AND email NOT LIKE '%@org%'),
weekly_activity AS (SELECT DATE_TRUNC('week', date)::DATE AS week_start,
COUNT(DISTINCT user_id) AS weekly_active_users
FROM (SELECT user_id, date
FROM org_storage_extra.stats_user_daily_counters
WHERE type IN ('created_file', 'created_folder', 'created_secure_fetch')
UNION ALL
SELECT user_id, date
FROM ipfs_pinning_facility.stats_user_daily_counters
WHERE type <> 'shares_viewed_by_others') activity_ids_dates
WHERE EXISTS(SELECT 1 FROM all_users WHERE id = user_id)
GROUP BY week_start),
preprocessed AS (SELECT week_start,
us.actual_users,
us.cumulative_churned_users,
us.weekly_new_users,
us.weekly_churned_users,
COALESCE(weekly_active_users, 0) AS weekly_active_users
FROM datetable dt
CROSS JOIN LATERAL (SELECT
COUNT(DISTINCT u.id) FILTER (WHERE u.registered_at < dt.week_start AND
(u.deleted_at IS NULL OR u.deleted_at > dt.week_start)) AS actual_users,
COUNT(DISTINCT u.id)
FILTER (WHERE u.deleted_at < dt.week_start + '1 week'::INTERVAL) AS cumulative_churned_users,
COUNT(DISTINCT u.id)
FILTER (WHERE u.registered_at >= dt.week_start AND u.registered_at <
dt.week_start +
'1 week'::INTERVAL) AS weekly_new_users,
COUNT(DISTINCT u.id)
FILTER (WHERE u.deleted_at >= dt.week_start AND u.deleted_at <
dt.week_start +
'1 week'::INTERVAL) AS weekly_churned_users
FROM all_users u
WHERE u.registered_at < dt.week_start + '1 week'::INTERVAL
OR (u.deleted_at >= dt.week_start AND
u.deleted_at < dt.week_start + '1 week'::INTERVAL)) us
LEFT JOIN weekly_activity
USING (week_start)
ORDER BY week_start DESC)
SELECT week_start AS for_week_of,
actual_users + cumulative_churned_users AS cumulative_users,
cumulative_churned_users,
cumulative_churned_users::FLOAT /
NULLIF((actual_users + cumulative_churned_users)::FLOAT, 0) AS cumulated_churn_rate,
actual_users,
weekly_new_users,
weekly_churned_users,
weekly_active_users,
weekly_churned_users::FLOAT / NULLIF(actual_users::FLOAT, 0) AS weekly_churn_rate
FROM preprocessed;
There are probably other optimizations possible, but this one was immediately evident.
|
76378322 | 76378801 | I cannot work out how to convert an int to a generic type containing complex128. Here is an example which doesn't compile unless the complex128 is commented out:
package main
import "fmt"
type val interface {
int64 | float64 | complex128
}
func f[V val](a, b V) (c V) {
q := calc()
return a * b * V(q)
}
func calc() int {
// lengthy calculation that returns an int
return 1
}
func main() {
fmt.Printf("%v\n", f(int64(1), int64(2)))
}
This is simplified from a much larger calculation. I've tried using a switch but every syntax I have attempted seems to meet resistance of one kind or another.
How can I multiply a and b with an integer?
I have tried using a switch on the type of the return variable such as any(c).(type) but for example if I have case complex128: then it refuses to allow the complex builtin since it doesn't return a V.
Without the complex128 the above will compile.
| How can I convert an int to a generic type containing complex128 in Go? | This one works but it needs to list every type in the switch statement:
func f[V val](a, b V) (c V) {
q := calc()
var temp any
switch any(c).(type) {
case complex128:
temp = complex(float64(q), 0)
case int64:
temp = int64(q)
default:
temp = float64(q)
}
return a * b * (temp.(V))
}
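A quick sanity check across all three types of the union (a sketch; with calc() returning 1, the results are just the products):
func main() {
    fmt.Printf("%v\n", f(int64(3), int64(4)))           // 12
    fmt.Printf("%v\n", f(2.5, 2.0))                     // 5
    fmt.Printf("%v\n", f(complex(1, 1), complex(2, 0))) // (2+2i)
}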
|
In WordPress with the WooCommerce plugin, is there any way to hide the "Display Cart" button in the mini cart widget?
I can hide the "Checkout" button individually, but there seems to be no special CSS class for the "Display Cart" button.
| Hide 'Display Cart' button in WooCommerce mini cart widget | You can try this:
add_action( 'woocommerce_widget_shopping_cart_buttons', 'bbloomer_remove_view_cart_minicart', 1 );
function bbloomer_remove_view_cart_minicart() {
remove_action( 'woocommerce_widget_shopping_cart_buttons', 'woocommerce_widget_shopping_cart_button_view_cart', 10 );
}
OR
.widget .woocommerce-mini-cart__buttons a:not(.checkout) {
display: none;
}
|
76378661 | 76378944 | I'm looking for the fastest way to parse a hex string representing a ulong into a uint keeping as many leading digits as a uint can handle and discarding the rest. For example,
string hex = "0xab54a9a1df8a0edb"; // 12345678991234567899
Should output: uint result = 1234567899;
I can do this by simply parsing the hex into a ulong, getting the digits using ToString and then just taking as many of them as would fit into uint without overflowing but I need something much faster. Thanks. C# code preferred but any would do.
| The fastest way to convert a UInt64 hex string to a UInt32 value preserving as many leading digits as possible, i.e. truncation | For decimal truncation, all the high bits of the hex digit affect the low 9 or 10 decimal digits, so you need to convert the whole thing. Is there an algorithm to convert massive hex string to bytes stream QUICKLY? asm/C/C++ has C++ with SSE intrinsics. I commented there with some possible improvements to that, and to https://github.com/zbjornson/fast-hex . This could be especially good if you're using SIMD to find numeric literals in larger buffers, so you might have the hex string in a SIMD register already. (Not sure if SIMDJSON does that.)
Hex-string to 64-bit integer is something SIMD certainly can speed up, e.g. do something to map each digit to a 0-15 integer, combine pairs of bytes to pack nibbles (e.g. with x86 pmaddubsw), then shuffle those 8-bit chunks to the bottom of a register. (e.g. packuswb or pshufb). x86 at least has efficient SIMD to GP-integer movq rax, xmm0, although the ARM equivalent is slow on some ARM CPUs.
(Getting a speedup from SIMD for ASCII hex -> uint is much easier if your strings are fixed-length, and probably if you don't need to check for invalid characters that aren't hex digits.)
Decimal truncation of u64 (C# ulong) to fit in u32 (C# uint)
Modulo by a power of 10 truncates to some number of decimal digits.
(uint)(x % 10000000000) works for some numbers, but 10000000000 (1e10 = one followed by 10 zeros) is larger than 2^32-1. Consider an input like 0x2540be3ff (9999999999). We'd get (uint)9999999999 producing 1410065407 = 0x540be3ff (keeping the low 32 bits of that 34-bit number.)
So perhaps try modulo 1e10, but if it's too big for u32 then modulo 1e9.
ulong tendigit = x % 10000000000; // 1e10
uint truncated = tendigit <= (ulong)0xffffffff ? tendigit : (x % 1000000000); // % 1e9 keeps 9 decimal digits
If this isn't correct C# syntax or the literals need some decoration to make them ulong (like C 10000000000uLL for good measure), please let me know.
It's probably at least as efficient to just modulo the original number two different ways than to try to get the leading decimal digit of x % 1e10 and subtract it or whatever. The asm is going to need two 64-bit multiplicative inverse constants, and starting from the original number again keeps critical-path latency shorter for out-of-order exec if branch prediction predicts that it needs to calculate the nine-digit truncation.
Binary truncation
@Matthew Whited deleted his answer (due to a bug in the decimal truncation part), but his binary truncation part based on substrings of the original hex input could perhaps be more efficient in some cases than doing the full conversion and then casting to a narrower type or masking with AND.
If you want the last 8 bytes of the hex string
uint.Parse(hex[^8..],NumberStyles.HexNumber)
If you want the first 8 bytes
uint.Parse(hex[2..10], NumberStyles.HexNumber);
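Putting both steps together, a hedged C# sketch of parse-then-truncate (plain ulong.Parse plus the modulo trick from above):
using System;
using System.Globalization;

class Program
{
    // Keep the low 10 decimal digits if they fit in a uint, else the low 9.
    static uint TruncateDecimal(ulong x)
    {
        ulong tenDigits = x % 10000000000;   // 1e10
        return tenDigits <= uint.MaxValue
            ? (uint)tenDigits
            : (uint)(x % 1000000000);        // 1e9
    }

    static void Main()
    {
        string hex = "0xab54a9a1df8a0edb";   // 12345678991234567899
        ulong value = ulong.Parse(hex.Substring(2), NumberStyles.HexNumber);
        Console.WriteLine(TruncateDecimal(value)); // prints 1234567899
    }
}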
|
76384304 | 76384361 | I am facing a problem. I have a project in Firebase, using Firebase Authentication, the Firebase Realtime Database, Firebase Functions, and more. Now I have changed my mind: I want to run my own server where I will set up and manage everything.
So I want to back up my project and move all the data to another framework, such as a Spring Boot project.
In this situation, how can I get the whole project: user auth data, the Realtime Database, Firestore, etc.?
| How to backup a full project of firebase | You'll have to write code or use the CLI to query all of the data you want, and write it to a place you want. Firebase does not provide a tool to do all this automatically for an entire project. You will need to deal with each product's data separately.
You can use the Firebase Admin SDK or the Firebase CLI to access data from the products you listed.
See also:
Is it possible to backup Firebase DB?
https://firebase.google.com/docs/firestore/manage-data/export-import
https://firebase.google.com/docs/cli/auth
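A hedged sketch of what those per-product exports can look like with the Firebase and Google Cloud CLIs (the project ID and bucket name below are placeholders):
# Auth users (including password hashes) to a local JSON file
firebase auth:export users.json --format=json --project my-project-id
# Realtime Database: dump the whole tree as JSON
firebase database:get / --project my-project-id > rtdb-backup.json
# Firestore: managed export into a Cloud Storage bucket
gcloud firestore export gs://my-backup-bucket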
|
76378577 | 76378945 | I am trying to build a simple language translating program. I imported the 'language_converter' gem to aid with this goal. I wrote the following code:
require 'language_converter'
class Translator
def initialize
@to = 'ja';
@from = 'en';
end
def translate text
lc(text, @to,@from)
end
end
#puts lc('welcome to Japan!', 'ja','en');
t = Translator.new
p t.translate('welcome to Japan!');
This code results in the error: undefined method 'lc' for #<Translator:0x0000000101167a90 @to="ja", @from="en"> (NoMethodError)
However, when i uncomment the code on line 15, ruby can access the lc method and return some japanese. Does anyone know why the method is 'defined' outside of the class but not inside?
Edit: the language-converter gem is not my own. Also, I cannot find the source code on its homepage.
I have also tried adding two colons before the lc method, like so: ::lc(text, @to,@from). This results in the error: syntax error, unexpected local variable or method, expecting constant
| Why does ruby recognise a method outside of a class, but not inside? | The gem is more than 10 years old and only has one method. And that method is implemented as a class method.
You are probably better off just rewriting that method in your application with modern Ruby syntax and proper error handling.
For reference, this is how lib/language_converter.rb in the gem looks:
require 'net/http'
require 'rubygems'
require "uri"
require 'json'
class UnSupportedLanguage < RuntimeError
def initialize(message='')
@msg = "not supported."
end
end
def self.lc( text, to, from='en' )
begin
uri = URI.parse("http://mymemory.translated.net/api/get")
response = Net::HTTP.post_form(uri, {"q" => text,"langpair"=>"#{from.to_s.downcase}|#{to.to_s.downcase}", "per_page" => "50"})
json_response_body = JSON.parse( response.body )
if json_response_body['responseStatus'] == 200
json_response_body['responseData']['translatedText']
else
puts json_response_body['responseDetails']
raise StandardError, response['responseDetails']
end
rescue UnSupportedLanguage
raise UnSupportedLanguage.new
rescue => err_msg
puts "#{err_msg}"
end
end
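A hedged sketch of such a rewrite (untested; it assumes the same MyMemory endpoint the gem uses, just on its current api.mymemory.translated.net host):
require 'net/http'
require 'json'
require 'uri'

def lc(text, to, from = 'en')
  uri = URI('https://api.mymemory.translated.net/get')
  uri.query = URI.encode_www_form(q: text, langpair: "#{from}|#{to}")
  body = JSON.parse(Net::HTTP.get(uri))
  raise body['responseDetails'].to_s unless body['responseStatus'] == 200
  body.dig('responseData', 'translatedText')
end
Note that a plain top-level def lc (unlike the gem's def self.lc, which defines a singleton method on main) becomes a private method on Object, so it is callable from inside your Translator class as well.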
|
76384270 | 76384365 | In this example, I want the purple rectangle to change its opacity to 100% regardless of the value of the parent. I tried using all: unset/initial and !important but it doesn't seem to work.
.rect {
width: 500px;
height: 600px;
margin-top: 200px;
margin-left: 300px;
background-color: black;
/* this V */
opacity: 37%;
z-index: -1;
}
.rect1 {
all: unset;
position: absolute;
z-index: 10;
width: 259px;
height: 300px;
margin-top: 500px;
margin-left: 50px;
background-color: purple;
/* to this V */
opacity: 100% !important;
}
<div class="rect">
<div class="rect1"></div>
</div>
| How to override parent's styles in css? | So like Haworth pointed out, using opacity on the element itself brings all children under the influence of the pixelshading used to make the opacity effect.
If you want to get the same effect while retaining your html structure I'd recommend a different approach for the same result using RGBA or hex with an alpha channel on the background-color property directly. See example below.
body {
height: 100%;
width: 100%;
background: url(https://picsum.photos/800) no-repeat;
background-size: cover;
}
.rect {
width: 500px;
height: 600px;
margin-top: 200px;
margin-left: 300px;
background-color: rgba(0,0,0,.37);
/* this V
opacity: 37%;*/
z-index: -1;
}
.rect1 {
position: absolute;
z-index: 10;
width: 259px;
height: 300px;
margin-top: 500px;
margin-left: 50px;
background-color: purple;
/* to this V */
opacity: 100% !important;
}
<div class="rect">
<div class="rect1"></div>
</div>
|
76378347 | 76378974 | I'm running a bat file in Windows. I'm trying to generate a log file of all the output that appears in the command prompt, to keep as a document.
Note: not a log file of the contents of the bat file, but of the command-prompt output it produces.
How would I do this? Thanks
| How to generate a log file of the windows prompt when I run a bat file | Redirecting output to a file is done using >, or appending to a file using >>.
For a batch file, we typically call it:
(call script.cmd)>"logfile.log" 2>&1
or append
(call script.cmd)>>"logfile.log" 2>&1
Note, 2>&1 redirects the stderr stream 2 to wherever the stdout stream 1 currently points, which is why it must come after the file redirection. It is important here, seeing as you said you want to log all of the output results to the logfile.
So that should also give the clue that you can in fact redirect success (stdout) results to one file and failures (stderr) to another, i.e
(call script.cmd) 1>"Output.log" 2>"Errors.log"
Note, some commands and executables send everything to the stdout stream and nothing to stderr, for example ping.exe.
|
76384255 | 76384399 | I need a value derived from "name" inside the obj object. How can I get it without a function invocation?
I wanna use just obj.isActive, not obj.isActive()
let obj = {
name: "X Æ A-12 Musk",
isActive: function () {
return this.name.length > 4;
},
};
// and after a while I need to check if is active:
console.log(obj);
// {
// name: 'X Æ A-12 Musk',
// isActive: [Function: isActive] <--------- NOT COOL !
// }
If I use an IIFE:
let obj = {
name: "X Æ A-12 Musk",
isActive: (function () {
return this.name.length > 4;
})(),
};
I get:
return this.name.length > 4;
^
TypeError: Cannot read properties of undefined (reading 'length')
| calculate an object property based on the value of another property of the same object | If you do not want to have to call isActive as a function, you can use a getter.
const obj = {
name: "X Æ A-12 Musk",
get isActive () {
return this.name.length > 4;
},
};
console.log(obj.isActive);
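If you cannot change the object literal itself, the getter can also be attached after the fact with Object.defineProperty (same idea, shown as a sketch):
const obj2 = { name: "X Æ A-12 Musk" };
Object.defineProperty(obj2, "isActive", {
  get() { return this.name.length > 4; },
  enumerable: true,
});
console.log(obj2.isActive); // true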
|
76384220 | 76384400 | Source Data:
json_data = [{"studentid": 1, "name": "ABC", "subjects": ["Python", "Data Structures"]},
{"studentid": 2, "name": "PQR", "subjects": ["Java", "Operating System"]}]
Hardcoded_Val1 = 10
Hardcoded_Val2 = 20
Hardcoded_Val3 = str(datetime.datetime.now())
Need to create a flat .txt file with the below data.
ID,DEPT,"studentid|name|subjects",execution_dt
10,20,"1|ABC|Python,Data Structures",2023-06-01
10,20,"2|PQR|Java,Operating System",2023-06-01
I am very new to Python and have already tried to figure out how to achieve this, but couldn't. Your help will be much appreciated.
import datetime
import pandas as pd
import json
json_data = [{"studentid": 1, "name": "ABC", "subjects": ["Python", "Data Structures"]},
{"studentid": 2, "name": "PQR", "subjects": ["Java", "Operating System"]}]
Hardcoded_Val1 = 10
Hardcoded_Val2 = 20
Hardcoded_Val3 = str(datetime.datetime.now())
profile = str(Hardcoded_Val1) + ',' + str(Hardcoded_Val2) + ',"' + str(json_data) + '",' + Hardcoded_Val3
print(profile)
#data = json.dumps(profile, indent=True)
#print(data)
data_list = []
for data_info in profile:
data_list.append(data_info.replace(", '", '|'))
data_df = pd.DataFrame(data=data_list)
data_df.to_csv(r'E:\DataLake\api_fetched_sample_output.txt', sep='|', index=False, encoding='utf-8')
| Code to format JSON data and append hardcoded data to create a flat .txt file | I would bypass using pandas for this and just build the string manually primarily using a list comprehension and join().
import datetime
import csv
Hardcoded_Val1 = 10
Hardcoded_Val2 = 20
Hardcoded_Val3 = str(datetime.date.today())
json_data = [
{"studentid": 1, "name": "ABC", "subjects": ["Python", "Data Structures"]},
{"studentid": 2, "name": "PQR", "subjects": ["Java", "Operating System"]}
]
csv_data = []
for row in json_data:
keys = "|".join(row.keys())
values = "|".join([
",".join(value) if isinstance(value, list) else str(value)
for value in row.values()
])
csv_data.append(dict([
("ID", Hardcoded_Val1),
("DEPT", Hardcoded_Val2),
(keys, values),
("execution_dt", Hardcoded_Val3)
]))
with open("out.csv", "w", encoding="utf-8", newline="") as file_out:
writer = csv.DictWriter(file_out, fieldnames=list(csv_data[0].keys()))
writer.writeheader()
writer.writerows(csv_data)
This will produce a file with the following contents:
ID,DEPT,studentid|name|subjects,execution_dt
10,20,"1|ABC|Python,Data Structures",2023-06-02
10,20,"2|PQR|Java,Operating System",2023-06-02
|
76380911 | 76381092 | I'm making an application multi-language.
I want to make the typing as strict and simple as possible. My code is the following:
//=== Inside my Hook: ===//
interface ITranslation {
[key:string]:[string, string]
}
const useTranslator = (translations:ITranslation) => {
const language = useLanguage() // just getting the language setting from another hook
const translate = (key:keyof typeof translations) => {
// mapping and returning the right translation
}
return translate;
}
//=== Inside the component: ===//
const translation:ITranslation = {
"something in english": [ "something in german", "something in spanish" ],
"anotherthing in english": ["anotherthing in german", "anotherthing in spanish"]
}
const translate = useTranslator(translation)
return(
<Text>{translate("something in english")}</Text>
)
What i want to achieve:
When passing the translations object with dynamic keys to the hook, useTranslator(translations), there should be a type check validating that both languages are provided (every property has an array with 2 strings).
When using the translate function (inside the Text component), TypeScript should raise an error if a key does not match the dynamic keys inside the translations object. So this should throw an error: translate("not a key in object")
But I can't get it to work properly. I can either set the translations object as const, but then there is no type check when passing the object to the hook.
Or I type it as shown above with translation:ITranslation, but then there is no type checking for the parameter of the translate function inside the component.
Is it possible to achieve that? (If yes, how?)
Thanks in advance!
| Expect function Parameter to be Key of Object with Dynamic Properties | This solution will only work for TypeScript >= 4.9, since it uses the satisfies operator introduced in 4.9.
Adding as const is the approach we will go with, and satisfies will allow us to type-check it.
const translation = {
'something in english': ['something in german', 'something in spanish'],
'anotherthing in english': ['anotherthing in german', 'anotherthing in spanish'],
} as const satisfies ITranslation;
Since we added as const the values in the ITranslation will be readonly [string, string], thus we have to update the ITranslation to the following:
interface ITranslation {
[key: string]: readonly [string, string];
}
Next, we need to add a generic parameter to useTranslator so it works over the specific instance of ITranslation. The same goes for the translate function. It should accept the generic parameter for the key of ITranslation and return the value for that specific key:
const useTranslator = <T extends ITranslation>(translations: T) => {
const language = useLanguage(); // just getting the language setting from another hook
const translate = <K extends keyof T>(key: K): T[K][number] => {
// return retrieved value
};
return translate;
};
Since it is not asked in the question translate will return a union of the translations for the specific key, which is achieved by T[K][number]
Usage:
const Component = () => {
const translate = useTranslator(translation);
// "something in german" | "something in spanish"
const case1 = translate('something in english');
// "anotherthing in german" | "anotherthing in spanish"
const case2 = translate( 'anotherthing in english');
return null;
};
playground
|
76381023 | 76381114 | I have added a script for showing a div before different divs at different screen sizes. This is the code I used:
jQuery(function($){
jQuery(document).ready(function(){
jQuery(window).on('resize', function(){
if(jQuery(window).width() <= 1024){
jQuery( ".checkout.woocommerce-checkout .woocommerce-shipping-fields__wrapper" ).insertBefore( ".checkout.woocommerce-checkout .flux-step.flux-step--2 .flux-checkout__shipping-table" );
}
else if(jQuery(window).width() >= 1025){
jQuery( ".checkout.woocommerce-checkout .woocommerce-shipping-fields__wrapper" ).insertBefore( ".checkout.woocommerce-checkout .flux-checkout__content-right #order_review" );
}
});
});
});
But the code is not working when I open the site; it only works if I resize the screen, presumably because it only runs inside the resize handler.
Can anyone please guide me on how to apply the two conditions even without resizing the screen, so one works above 1024px and the other below?
TIA
| jquery above and below screen sizes | Just put your code in a function and call it on the document ready:
$(function(){
resize();
$(window).on('resize', resize);
function resize(){
$( ".checkout.woocommerce-checkout .woocommerce-shipping-fields__wrapper" )
.insertBefore(
$(window).width() <= 1024 ?
".checkout.woocommerce-checkout .flux-step.flux-step--2 .flux-checkout__shipping-table" :
".checkout.woocommerce-checkout .flux-checkout__content-right #order_review"
);
}
});
|
76378620 | 76378944 | I am trying to compare the QuickCheck library to the SmallCheck one. In SmallCheck I can reach particular values by manipulating the depth parameter. In QuickCheck:
>a<-generate (replicateM 10000 arbitrary) :: IO [Int]
>length a
10000
>maximum a
30
and my question then is: why are 10,000 "random" ("arbitrary") integers limited by 30?! I expected to see more "widely" distributed values within the range 0..10,000, maybe the maximum value close to 5,000.
| How is arbitrary distributed for Int? Why is it limited to such small values? | The documentation contains a clue:
The size passed to the generator is always 30
By default QuickCheck works by starting with 'easy' or 'small' inputs to see if it can find counterexamples with those. Only if it finds no problems with the small inputs does it gradually widen the range of generated input. The size value (which runs implicitly throughout everything that QuickCheck does) is the value that controls this behaviour.
When you run QuickCheck (e.g. with quickCheck) it automatically increases the size as it goes.
You're not really supposed to use the generate function directly, but if you do, you can resize it:
ghci> b <- generate (replicateM 10000 (resize 60 arbitrary)) :: IO [Int]
ghci> maximum b
60
That said, how are you supposed to use QuickCheck? The documentation describes quickCheck along with a multitude of variations you can use to evaluate properties.
Personally, I integrate my QuickCheck properties with a unit testing framework with testProperty. You can see examples here: Property-based testing is not the same as partition testing.
|
76384387 | 76384457 | I have the following simple function (make) that calls the handle function and is supposed to retry a number of times whenever that function throws. If the retries are exhausted, the make function should throw the error.
const handle = async (): Promise<string> => 'hi';
const make = async (): Promise<string> => {
const MAX_RETRIES = 2;
for (let idx = 0; idx <= MAX_RETRIES; idx++) {
try {
return await handle();
} catch (err) {
if (idx < MAX_RETRIES) {
continue;
} else {
throw err;
}
}
}
};
I'm using TypeScript, which is complaining because the return type doesn't include undefined:
Function lacks ending return statement and return type does not include 'undefined'.
For reference, this is the TS Playground for the code above.
I'm looking for guidance on how to handle the return type for the function.
Note that:
I don't want to change my tsconfigs (currently set to strict)
I don't want to modify the return type to Promise<string | undefined>
My understanding is that the make function can only either return a string (inside the try block) or throw an error once the retries have been exhausted. If that's the case then where does the undefined that TS is asking for comes from? Am I missing something?
| How can I resolve the TypeScript error 'Function lacks ending return statement and return type does not include 'undefined'' in my code? |
My understanding is that the make function can only either return a string (inside the try block) or throw an error once the retries have been exhausted.
I'm fairly sure you're right, but TypeScript can't quite follow logic that complex, so it (incorrectly, I think) sees a path through the function that doesn't do an explicit return and so implicitly returns undefined (wrapped in a promise).
You can solve it in a few ways:
Add a return ""; at the end with a comment noting it'll never happen. (Blech.)
Add a throw new Error("Logic error, this will never be reached."); at the end.
Rewrite the function to make the final attempt more obviously a return-or-throw situation by using < instead of <= and then repeating the return await handle(); at the end. (Not great to have to repeat it, but it's very simple.)
I don't think #1 or #2 need examples, but here's what #3 might look like:
const make = async (): Promise<string> => {
const MAX_RETRIES = 2;
for (let idx = 0; idx < MAX_RETRIES; idx++) {
try {
return await handle();
} catch (err) {
continue; // I guess technically we don't need this, since
// the loop doesn't do anything else
}
}
return await handle();
};
For me, #2 is the winner (or jcalz's rewrite), but any of them will make TypeScript happy, it's really a style choice.
|
76384356 | 76384460 | I am new to PromQL, so I'm not sure whether it supports my requirement or not.
max_over_time(cbnode_systemstats_cpu_utilization_rate{instance="a",node="a"}[6h])
The above query gives me the max CPU utilization over the past 6 hours for a single instance a.
However, I want a query which fetches the metric for all instances where instance and node have the same value. Something similar to below:
max_over_time(cbnode_systemstats_cpu_utilization_rate{instance = node}[6h])
| How can i get all the metrics where two label have same values using promql? | There is no easy elegant way to do that.
But you can utilize label_replace, logic of label matching for binary operations and a pinch of ingenuity.
label_replace(cbnode_systemstats_cpu_utilization_rate{}, "pseudoid", "$1", "instance", "(.*)")
== label_replace(cbnode_systemstats_cpu_utilization_rate{}, "pseudoid", "$1", "node", "(.*)")
Here we add to the LHS metric a new label called pseudoid with the value of instance, and do the same for the RHS, but with the value of node.
A result is returned only if all labels are the same, which in turn means that instance == pseudoid == node.
Demo of similar query can be seen here.
Notice that since this is no longer an instant vector selector, you'll need to use subquery syntax to pass it into max_over_time.
Your resulting query should look like this:
max_over_time(
(
label_replace(cbnode_systemstats_cpu_utilization_rate{}, "pseudoid", "$1", "instance", "(.*)")
== label_replace(cbnode_systemstats_cpu_utilization_rate{}, "pseudoid", "$1", "node", "(.*)")
)[6h:]
)
|
76381015 | 76381115 | This is the ItemManufactureController file:
class ItemManufactureController extends Controller
{
public function index(){
return view('item_manufacture');
}
// Save category data into the database
public function store(Request $request){
$newManufacture = new ItemManufacture;
$newManufacture->name = $request->input('txtManufactureName');
$newManufacture->status = $request->input('status', 'available');
dd($newManufacture);
$newManufacture->save();
return redirect('/item_manufacture');
}
}
This is the item_manufacture.blade.php file:
{{--this page add to layout --}}
@extends('layout.layout_01')
{{--identity the content form the layout--}}
@section('content')
<div class="container">
<div class="row">
<div class="col-md-4"></div>
<div class="col-md-4">
<div class="card">
<h5 class="card-header">Add Item Manufacture Details</h5>
<div class="card-body">
<div class="input-field p-3">
<label for="txtManufactureName">Manufacture Name :</label>
<div class="col-sm-8 p-2">
<input type="text" placeholder="Item Name" name="txtManufactureName" id="txtManufactureName">
</div>
</div>
<div class="input-field p-3">
<div class="col-sm-8 p-2">
</div>
</div>
<a href="/save_manufacture" class="btn btn-primary mb-2" id="btnAdd">ADD</a>
</div>
</div>
</div>
<div class="col-md-4"></div>
</div>
</div>
@endsection
This is the route file:
//save manufacture
Route::get('/save_manufacture', [ItemManufactureController::class, 'store'])->name('saveManufacture');
Route::get('/item_manufacture', function (){
return view('pages.item_manufacture');
});
This is the Model file:
class ItemManufacture extends Model
{
use HasFactory;
// public $timestamps=false;
protected $connection = 'mysql';
protected $primaryKey = 'id';
protected $table = 'item_manufacture';
protected $fillable = [
'name',
'status'];
}
When I add data into the form and click the "ADD" button, the request array comes back with null values.
I used the Laravel 8 framework; when I add data into the input field of the item_manufacture form, the data is not passed in the request array. If there is any error in my code, please correct it.
How to save data and get values from the input fields using Laravel framework?
| How to Save data to the Database using Laravel 8? | Make your route a POST route, since you're storing data, and rename it to saveManufacture.store when chaining the name() method:
Route::post('/save_manufacture', [ItemManufactureController::class, 'store'])->name('saveManufacture.store');
In your Blade file, wrap your inputs inside a form tag and set the named route in its action attribute.
Then replace the a (anchor) tag with a submit input, since the action is now carried by the form tag. Your Blade file will look like this:
{{--this page add to layout --}}
@extends('layout.layout_01')
{{--identity the content form the layout--}}
@section('content')
<div class="container">
<div class="row">
<div class="col-md-4"></div>
<div class="col-md-4">
<div class="card">
<h5 class="card-header">Add Item Manufacture Details</h5>
<div class="card-body">
<form action="{{ route('saveManufacture.store') }}" method="post">
@csrf {{-- required for POST routes, otherwise Laravel returns a 419 Page Expired error --}}
<div class="input-field p-3">
<label for="txtManufactureName">Manufacture Name :</label>
<div class="col-sm-8 p-2">
<input type="text" placeholder="Item Name" name="txtManufactureName" id="txtManufactureName">
</div>
</div>
<div class="input-field p-3">
<div class="col-sm-8 p-2">
</div>
</div>
<input type="submit" class="btn btn-primary mb-2" id="btnAdd" value="ADD">
</form>
</div>
</div>
</div>
<div class="col-md-4"></div>
</div>
</div>
@endsection
Now you'll be able to get the request params in your store() function; you can verify them with dd($request->post());
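One more thing worth pointing out, based on the controller shown in the question: the dd($newManufacture) call in store() halts the request before save() ever runs, so remove it once you are done debugging. A sketch of the cleaned-up method:
public function store(Request $request){
    $newManufacture = new ItemManufacture;
    $newManufacture->name = $request->input('txtManufactureName');
    $newManufacture->status = $request->input('status', 'available');
    // dd($newManufacture); // removed: dd() aborts the request before save()
    $newManufacture->save();
    return redirect('/item_manufacture');
}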
|
76378362 | 76378998 | I am working with a chrome extension which uses webpack to build.
To build I use this : cross-env NODE_ENV=production yarn webpack -c webpack.config.js --mode production
webpack.config.js
const HTMLPlugin = require('html-webpack-plugin');
const CopyPlugin = require('copy-webpack-plugin');
const path = require('path');
const UglifyJSPlugin = require('uglifyjs-webpack-plugin');
const BrowserExtensionPlugin = require("extension-build-webpack-plugin");
module.exports = {
entry: {
options: './src/options.tsx',
popup: './src/popup.tsx',
content: './src/content.tsx',
background: './src/background.tsx',
},
output: {
filename: '[name].js',
path: path.resolve(__dirname, 'build'),
},
resolve: {
extensions: ['.js', '.jsx', '.ts', '.tsx', '.css'],
modules: [path.resolve(__dirname, 'src'), 'node_modules'],
alias: {
react: 'preact/compat',
'react-dom': 'preact/compat',
},
},
module: {
rules: [
{
test: /\.(tsx|jsx|ts|js)x?$/,
exclude: /node_modules/,
use: [
{
loader: 'babel-loader',
options: {
presets: [
"@babel/preset-env",
"@babel/preset-react",
"@babel/preset-typescript",
],
},
},
],
},
{
test: /\.svg$/,
use: ['@svgr/webpack'],
},
],
},
plugins: [
new HTMLPlugin({
chunks: ['options'],
filename: 'options.html',
title: 'Options page title',
}),
new HTMLPlugin({
chunks: ['popup'],
filename: 'popup.html',
}),
new CopyPlugin([
{ from: './src/_locales/', to: './_locales' },
{ from: './src/assets', to: './assets' },
{ from: './src/manifest.json', to: './manifest.json' },
]),
new BrowserExtensionPlugin({devMode: false, name: "build/chromium.zip", directory: "src", updateType: "minor"}),
],
optimization: {
minimizer: [
new UglifyJSPlugin({
uglifyOptions: {
compress: {
drop_console: true,
drop_debugger: true,
}
}
})
]
},
mode: 'production',
stats: 'minimal',
performance: {
hints: false,
maxEntrypointSize: 512000,
maxAssetSize: 512000
}
};
manifest.json:
{
"manifest_version": 3,
"name": "__MSG_appName__",
"description": "__MSG_appDesc__",
"default_locale": "en",
"version": "0.1.0",
....
....
}
If I run cross-env NODE_ENV=production yarn webpack -c webpack.config.js --mode production again, it increments the version from 0.1.0 to 0.2.0 automatically, not just in the build folder but in the src folder as well. How can I prevent this auto-increment behaviour?
I suspect it's due to one of the webpack plugins I am using.
| Prevent webpack from auto-incrementing project version | This is caused by extension-build-webpack-plugin which you really shouldn't have struggled to find, as there's a total of 4 plugins there to look at.
No, it does not offer any method of avoiding version bumps. You can only configure whether you want it to bump the major or minor version number, defaulting to minor.
It's a strange library to depend on: it gets few downloads and is unmaintained. There are probably better alternatives out there.
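If the only thing you used it for was zipping the build, one possible replacement (a sketch, assuming the zip-webpack-plugin package) is to drop BrowserExtensionPlugin from the plugins array and add:
const ZipPlugin = require('zip-webpack-plugin');
// Zips the contents of output.path after the build; no version rewriting involved.
new ZipPlugin({ filename: 'chromium.zip' })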
|
76384281 | 76384463 | I want to parse some data that's in a string format. Anything enclosed in parenthesis in the string to parse should be replaced with itself run through a function. This is what I want:
function foo(str) {
return parseInt(str) + 1; // Example function, not actually what the function will be
}
function parse(str) {
// everything in str that is enclosed in parenthesis should be replaced with itself ran through foo();
// Example
// Input: "My name is foo and I am (0) year old."
// Output: "My name is foo and I am 1 year old."
// "(0)" has been replaced with the result of foo("0")
}
I have thought up a couple bad workarounds, but I want something more robust. For example:
function parse(str) {
// Input: "My name is foo and I am (0) year old."
str = str.replaceAll("(", "${foo('");
str = str.replaceAll(")", "')}");
str = "`" + str + "`"
// Here str will be "`My name is foo and I am ${foo(0)} year old.`"
// And I can use eval() or something to treat it like I've typed that
}
This, however, is kind of a bad way of doing it.
EDIT: I tested it, it works, but it is quite vulnerable.
I can't think of anything else and I'm not very good with RegEx. (although I'd accept a solution using it)
| Javascript: run specific parts of strings through a function | Here's what I would do. I would match the string with a RegEx that matches anything inside parentheses in the string. With that, I would then use str.replaceAll() to replace the matched string with the result of the foo() function.
const regex = /\((\d*)\)/gm;
function foo(str) {
return parseInt(str) + 1;
}
function parse(str) {
// Loop all match the regex find in the string
let m;
while ((m = regex.exec(str)) !== null) {
// This is necessary to avoid infinite loops with zero-width matches
if (m.index === regex.lastIndex) {
regex.lastIndex++;
}
// Replace all instance of the match with the operation of the match
str = str.replaceAll(m[0], foo(m[1]))
}
return str;
}
let p = parse('My name is foo and I am (0) year old and I want (54) apples');
// The result will be: My name is foo and I am 1 year old and I want 55 apples
With that, you won't need to use eval(), which potentially poses a risk for your application.
I hope that would work for you. If I missed anything, tell me, I will edit my answer.
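As a side note, a more compact variant of the same idea (an alternative sketch, not part of the original answer) uses the replacer-function form of String.replace, which avoids mutating the string while iterating over matches:
function parse(str) {
  // Every parenthesised run of digits is replaced by foo() applied to those digits.
  return str.replace(/\((\d*)\)/g, (_, digits) => foo(digits));
}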
|
76381105 | 76381147 | I have a time series, sampled every minute, which looks like this:

| Time | Volume every minute |
| --- | --- |
| 2023-05-25T00:00:00Z | 284 |
| 2023-05-25T00:01:00Z | 421 |
| ... | ... |
| 2023-05-27T23:58:00Z | 894 |
| 2023-05-27T23:59:00Z | 357 |
I have to make a new CSV by iterating over the Time column, finding the unique dates, and making new columns with the corresponding volume-per-minute values. For example, the desired output:

| Date | min1 | min2 | ... | min1440 |
| --- | --- | --- | --- | --- |
| 2023-05-25 | 284 | 421 | ... | 578 |
| 2023-05-26 | 512 | 645 | ... | 114 |
| 2023-05-27 | 894 | 357 | ... | 765 |
I am able to fetch the unique dates, but after that I am clueless. Here is my sample code:
import pandas as pd
train_data = pd.read_csv('date25to30.csv')
print(pd.to_datetime(train_data['time']).dt.date.unique())
| Find unique date from existing dataframe and make a new CSV with corresponding column values | First, add the parse_dates parameter to read_csv to convert the Time column to datetimes:
train_data = pd.read_csv('date25to30.csv', parse_dates=['Time'])
Then create minutes by converting HH:MM:SS to timedeltas with to_timedelta and Series.dt.total_seconds, divide by 60, and add 1 because Python counts from 0:
minutes = (pd.to_timedelta(train_data['Time'].dt.strftime('%H:%M:%S'))
.dt.total_seconds()
.div(60)
.astype(int)
.add(1))
Lastly, pass everything to DataFrame.pivot_table with DataFrame.add_prefix:
df = (train_data.pivot_table(index=train_data['Time'].dt.date,
columns=minutes,
values='Volume',
aggfunc='sum').add_prefix('min'))
print (df)
Time min1 min2 min1439 min1440
Time
2023-05-25 284.0 421.0 NaN NaN
2023-05-27 NaN NaN 894.0 357.0
|
76378633 | 76379006 | I want to hide the AppBar on scroll. The search icon is hidden properly and also the opacity decreases on scroll. But for the title, it is not working.
import 'package:flutter/material.dart';
import 'package:vet_mobile/screens/chat.dart';
import 'package:vet_mobile/screens/logot.dart';
class HomeScreen extends StatelessWidget {
const HomeScreen({Key? key}) : super(key: key);
@override
Widget build(BuildContext context) {
return DefaultTabController(
length: 3,
child: Scaffold(
body: NestedScrollView(
headerSliverBuilder: (BuildContext context, bool innerBoxIsScrolled) {
return <Widget>[
SliverAppBar(
title: Row(
mainAxisAlignment: MainAxisAlignment.spaceBetween,
children: [
Text(
'WhatsApp',
style: TextStyle(
color: Theme.of(context).textTheme.bodyLarge!.color,
),
),
IconButton(
onPressed: () {},
icon: Icon(
Icons.search,
color: Theme.of(context).textTheme.bodyLarge!.color,
),
),
],
),
pinned: true,
floating: true,
elevation: 5,
bottom: TabBar(
indicatorSize: TabBarIndicatorSize.tab,
indicatorWeight: 4,
indicatorColor: Theme.of(context).textTheme.bodyLarge!.color,
labelStyle:
TextStyle(fontSize: 13, fontWeight: FontWeight.w600),
labelColor: Theme.of(context).textTheme.bodyLarge!.color,
unselectedLabelColor:
Theme.of(context).textTheme.bodySmall!.color,
dividerColor: Colors.transparent,
tabs: const [
Tab(text: 'CHATS'),
Tab(text: 'STATUS'),
Tab(text: 'CALLS'),
],
),
),
];
},
body: const TabBarView(
children: [
Center(child: LogoutScreen()),
Center(child: ChatScreen()),
Center(child: Text('Patient')),
],
),
),
),
);
}
}
As you can see, the opacity of the search button decreases gradually as I scroll down, but the title's does not.
I tried using the preferred height, animation controller, but it messed up more.
| Cannot properly hide the appbar title on scroll in flutter | Seems that this effect does not work when you set a custom style. Remove the fixed style setting from here:
Text(
'PawCare',
// remove this
/*style: TextStyle(
color: Theme.of(context).textTheme.bodyLarge!.color,
),*/
),
To set the style of the title text, use the titleTextStyle configuration of SliverAppBar:
SliverAppBar(
titleTextStyle: TextStyle(
color: Theme.of(context).textTheme.bodyLarge!.color),
...
|
76378657 | 76379177 | I have the following algebraic data type:
data Tree a = Empty | Node a (Tree a) (Tree a)
deriving (Show, Eq)
Also, I have
data Step = StepL | StepR
deriving (Show, Eq)
Now, I need a function search that takes
a root of the tree
a target value t
... and it must return a path of type [Step] leading to a node with value t. Also, if t is not present in the tree, search must return Nothing. Finally, the input is guaranteed to have the target value at most once.
My best effort, as of now, is:
searchHelper :: Eq a => a -> Tree a -> [Step] -> Maybe [Step]
searchHelper _ Empty _ = Nothing
searchHelper targetValue (Node nodeValue leftChild rightChild) stepsSoFar =
if targetValue == nodeValue then Just stepsSoFar
else if searchHelper targetValue leftChild (stepsSoFar ++ [StepL]) /= Nothing then searchHelper targetValue leftChild (stepsSoFar ++ [StepL])
else if searchHelper targetValue rightChild (stepsSoFar ++ [StepR]) /= Nothing then searchHelper targetValue rightChild (stepsSoFar ++ [StepR])
else Nothing
search :: Eq a => a -> Tree a -> Maybe [Step]
search targetValue root = searchHelper targetValue root []
As you can see, I call searchHelper too often (else if searchHelper targetValue leftChild (stepsSoFar ++ [StepL]) /= Nothing then searchHelper targetValue leftChild (stepsSoFar ++ [StepL])). I need machinery that would allow me to cache the results of searchHelper calls and use them in if ... then ... else.
Q: How can I do it?
| Haskell: cache result of a function in pattern matching | The use of the word cache confused me, but if I understand the question correctly, the real problem is the repeated use of the same expression. That could certainly become a readability and maintainability issue in a larger code base, so is worthwhile addressing.
From the context this looks like a 'toy problem'. There's nothing wrong with that - I play with plenty of those myself to learn new stuff. The reason I mention it, though, is that from this and other clues I gather that you're still a Haskell beginner. Again: nothing wrong with that, but it just means that I'm going to skip some of the slightly more advanced Haskell stuff.
Checking for Nothing or Just like in the OP is rarely idiomatic Haskell. Instead you'd use pattern-matching or (more commonly) some of the higher-level APIs for working with Maybe (such as Functor, Applicative, Monad, etc.).
That said, I gather that this isn't quite what you need right now. In order to cut down on the duplication of expressions, you can use let..in syntax in Haskell:
searchHelper :: Eq a => a -> Tree a -> [Step] -> Maybe [Step]
searchHelper _ Empty _ = Nothing
searchHelper targetValue (Node nodeValue leftChild rightChild) stepsSoFar =
if targetValue == nodeValue then Just stepsSoFar
else
let l = searchHelper targetValue leftChild (stepsSoFar ++ [StepL])
in if l /= Nothing then l
else
let r = searchHelper targetValue rightChild (stepsSoFar ++ [StepR])
in if r /= Nothing then r
else Nothing
This enables you to 'declare' 'variables' l and r and reuse them.
As my lengthy preamble suggests, this still isn't idiomatic Haskell, but I hope it addresses the immediate question.
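For completeness, here is a sketch of the more idiomatic direction hinted at above, reusing the Tree and Step types from the question. It uses the Alternative instance of Maybe: <|> returns the first Just it encounters, which is exactly the 'try left, then right' logic. Paths are accumulated in reverse and flipped once at the end:
import Control.Applicative ((<|>))

search :: Eq a => a -> Tree a -> Maybe [Step]
search t = go []
  where
    go _    Empty        = Nothing
    go path (Node v l r)
      | v == t    = Just (reverse path)
      | otherwise = go (StepL : path) l <|> go (StepR : path) r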
|
76383893 | 76384507 | Python OOP problem
I need a MultiKeyDict class, which is almost identical to the dict class. Creating an instance of the MultiKeyDict class should be similar to creating an instance of the dict class:
multikeydict1 = MultiKeyDict(x=1, y=2, z=3)
multikeydict2 = MultiKeyDict([('x', 1), ('y', 2), ('z', 3)])
print(multikeydict1['x']) # 1
print(multikeydict2['z']) # 3
A feature of the MultiKeyDict class should be the alias() method, which should allow aliases to be given to existing keys. The reference to the created alias should not differ from the reference to the original key, that is, the value has two keys (or more if there are several aliases) when the alias is created:
multikeydict = MultiKeyDict(x=100, y=[10, 20])
multikeydict.alias('x', 'z') # add key 'x' alias 'z'
multikeydict.alias('x', 't') # add alias 't' to key 'x'
print(multikeydict['z']) # 100
multikeydict['t'] += 1
print(multikeydict['x']) # 101
multikeydict.alias('y', 'z') # now 'z' becomes an alias of the key 'y'
multikeydict['z'] += [30]
print(multikeydict['y']) # [10, 20, 30]
The value must remain available by alias even if the original key was removed:
multikeydict = MultiKeyDict(x=100)
multikeydict.alias('x', 'z')
del multikeydict['x']
print(multikeydict['z']) # 100
Keys must take precedence over aliases. If some key and alias are the same, then all operations when accessing them must be performed with the key:
multikeydict = MultiKeyDict(x=100, y=[10, 20])
multikeydict.alias('x', 'y')
print(multikeydict['y']) # [10, 20]
I can't implement this feature; please give me ideas on how it can be done!
multikeydict = MultiKeyDict(x=100)
multikeydict.alias('x', 'z')
del multikeydict['x']
print(multikeydict['z']) # 100
my code does not work with this test
multikeydict = MultiKeyDict(x=100)
multikeydict.alias('x', 'z')
del multikeydict['x']
print(multikeydict['z']) #100
class MultiKeyDict(dict):
def __init__(self, *args, **kwargs):
super().__init__(*args, **kwargs)
self.aliases = {}
def alias(self, key, alias):
self.aliases[alias] = key
def __getitem__(self, key):
if key in self.aliases:
key = self.aliases[key]
return super().__getitem__(key)
def __setitem__(self, key, value):
if key in self.aliases:
key = self.aliases[key]
super().__setitem__(key, value)
def __delitem__(self, key):
if key in self.aliases:
del self.aliases[key]
super().__delitem__(key)
multikeydict = MultiKeyDict(x=100, y=[10, 20])
multikeydict.alias('x', 'z')
multikeydict.alias('x', 't')
print(multikeydict['z'])
multikeydict['t'] += 1
print(multikeydict['x'])
multikeydict.alias('y', 'z')
multikeydict['z'] += [30]
print(multikeydict['y'])
| Implement MultiKeyDict class in Python with alias() method for creating aliases. Existing code fails when original key is deleted. Need fix | Some remarks:
As the specification says that keys should have precedence over aliases (when both exist), you should first test key membership on self before looking in aliases. Your methods first check for membership in aliases...
As a value must continue to exist when a key is deleted for which there are still alias(es), I would suggest storing the values wrapped in a list (that just has that value as only member). This way you can reference that list in an alias entry. When the key is deleted, the alias will still have the reference to the list and can still act on it.
Here is how that could look:
class MultiKeyDict(dict):
def __init__(self, *args, **kwargs):
super().__init__(*args, **kwargs)
self.aliases = {}
# wrap each value in a list of size 1:
for key, value in self.items():
super().__setitem__(key, [value])
def alias(self, key, alias):
self.aliases[alias] = super().__getitem__(key)
def __getitem__(self, key):
if key in self:
return super().__getitem__(key)[0]
return self.aliases[key][0]
def __setitem__(self, key, value):
if key in self:
super().__getitem__(key)[0] = value
elif key in self.aliases:
self.aliases[key][0] = value
else:
super().__setitem__(key, [value])
def __delitem__(self, key):
if key in self:
return super().__delitem__(key)
del self.aliases[key]
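A quick check against the tests from the question (expected output shown as comments):
multikeydict = MultiKeyDict(x=100)
multikeydict.alias('x', 'z')
del multikeydict['x']
print(multikeydict['z'])  # 100, the alias still holds the shared one-element list

multikeydict = MultiKeyDict(x=100, y=[10, 20])
multikeydict.alias('x', 'y')
print(multikeydict['y'])  # [10, 20], the real key wins over the alias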
|
76381091 | 76381163 | The scenario is the following:
type Option = 'a' | 'b' | 'c' | 'd'
type Question = {
message: string;
options: Option[];
default: Option // here's the issue
}
I want the default prop to be one of the options used inside question.options. For example:
const q1: Question = {
message: 'first question',
options: ['a', 'b'],
default: 'a'
}
const q2: Question = {
message: 'second question',
options: ['c', 'd'],
default: 'a' // I want this to give an error because 'a' is not in 'c' | 'd'
}
How can I achieve this?
| Narrow down literal unions based on previously used values | It can be done just by using Question; however, it would require a complex type that gives the compiler a horrible time, since it grows as a power of two, and with more options (more than 10) the compiler will reach its limits and won't compile.
Instead, I would suggest adjusting Question to accept the Option[] as a generic parameter and assign the type of the elements of that generic parameter to default:
type Question<T extends Option[]> = {
message: string;
options: T;
default: T[number];
};
Lastly, we will need a generic function that would create a question for us:
const createQuestion = <T extends Option[]>(question: Question<T>) => question;
Usage:
const q1 = createQuestion({
message: "first question",
options: ["a", "b"],
default: "a",
});
const q2 = createQuestion({
message: "second question",
options: ["c", "d"],
default: "a", // Expected error
});
playground
|
76378693 | 76379413 | I want to make my NavigationBar transparent. I have tried extendBody: true on the Scaffold with surfaceTintColor: Colors.transparent on the NavigationBar widget, but nothing changed.
| How to create a transparent Material 3 NavigationBar in Flutter? | According to the documentation, surfaceTintColor is the color of the surface tint overlay applied to the app bar's background color to indicate elevation.
If you want to make the AppBar (or NavigationBar) transparent, just use the backgroundColor property instead.
Scaffold(
extendBody: true,
backgroundColor: Colors.white,
appBar: AppBar(
backgroundColor: Colors.transparent, // To make appBar transparent
/// This is not necessary. You can play around
/// to see surfaceTintColor when the AppBar is transparent
surfaceTintColor: Colors.redAccent,
elevation: 3,
title: Text(widget.title),
),
),
It is also applied to NavigationBar
bottomNavigationBar: NavigationBar(
surfaceTintColor: Colors.amber, // not necessary
backgroundColor: Colors.transparent,
destinations: [
Icon(Icons.book, color: Colors.blue,),
Icon(Icons.map, color: Colors.blue,),
],
),
|
76378332 | 76379520 | I am using library(tableone) to compute descriptive statistics for multiple variables.
This is my code:
library(tableone)
myVars <- c("class", "age", "Sex", "bmi", "bmi_category",
"drink_freq", "smoke_yn", "edu_dummy")
catVars <- c("class", "Sex", "bmi_category",
"drink_freq", "smoke_yn", "edu_dummy")
tab1_inf <- CreateTableOne(vars = myVars, strata = "NEWDI",
data = TKA_table1, factorVars = catVars)
a1 <- print(tab1_inf, exact = "NEWDI", showAllLevels = TRUE)
By default the percentages are computed by column, and I want to change the format to row percentages, like this (example):
I checked its description and found no options to set.
https://rdrr.io/cran/tableone/man/print.TableOne.html
How can I do it?
| How to use tableone to change table percentage by row? | With some clever getting-your-hands dirty, you can manipulate the percentages in the TableOne object. This uses an example dataset called pbc from survival package.
library(tableone)
library(survival)
data(pbc)
## Make categorical variables factors
varsToFactor <- c("status","trt","ascites","hepato","spiders","edema","stage")
pbc[varsToFactor] <- lapply(pbc[varsToFactor], factor)
## Create a variable list
vars <- c("time","status","age","sex","ascites","hepato",
"spiders","edema","bili","chol","albumin",
"copper","alk.phos","ast","trig","platelet",
"protime","stage")
## Create Table 1 stratified by trt
tableOne <- CreateTableOne(vars = vars, strata = c("trt"), data = pbc)
tableOne
Before
Stratified by trt
1 2 p test
n 158 154
time (mean (SD)) 2015.62 (1094.12) 1996.86 (1155.93) 0.883
status (%) 0.894
0 83 (52.5) 85 (55.2)
1 10 ( 6.3) 9 ( 5.8)
2 65 (41.1) 60 (39.0)
age (mean (SD)) 51.42 (11.01) 48.58 (9.96) 0.018
sex = f (%) 137 (86.7) 139 (90.3) 0.421
ascites = 1 (%) 14 ( 8.9) 10 ( 6.5) 0.567
hepato = 1 (%) 73 (46.2) 87 (56.5) 0.088
spiders = 1 (%) 45 (28.5) 45 (29.2) 0.985
...
You should try to adapt the following code for your own data format:
for (i in seq_along(tableOne$CatTable[[1]])) {
  sum = tableOne$CatTable[[1]][[i]]$freq + tableOne$CatTable[[2]][[i]]$freq
  # multiply by 100 here if you want percentages rather than proportions
  tableOne$CatTable[[1]][[i]]$percent = tableOne$CatTable[[1]][[i]]$freq / sum
  tableOne$CatTable[[2]][[i]]$percent = tableOne$CatTable[[2]][[i]]$freq / sum
}
tableOne
After
Stratified by trt
1 2 p test
n 158 154
time (mean (SD)) 2015.62 (1094.12) 1996.86 (1155.93) 0.883
status (%) 0.894
0 83 (0.5) 85 (0.5)
1 10 (0.5) 9 (0.5)
2 65 (0.5) 60 (0.5)
age (mean (SD)) 51.42 (11.01) 48.58 (9.96) 0.018
sex = f (%) 137 (0.5) 139 (0.5) 0.421
ascites = 1 (%) 14 (0.6) 10 (0.4) 0.567
hepato = 1 (%) 73 (0.5) 87 (0.5) 0.088
spiders = 1 (%) 45 (0.5) 45 (0.5) 0.985
|
76384509 | 76384598 | In the code below, we have a dataset that can be read as: "two cooks cook1, cook2 are doing a competition. They have to make four dishes, each time with two given ingredients ingredient1, ingredient2. A jury has scored the dishes and the grades are stored in _score.
I want to use Altair to show a graph where the x-axis is each dish (1, 2, 3, 4) and the y-axis contains the scores of the two cooks separately. This currently works but the main issue is that on hover, the tooltip does not include the score of the current point that is being hovered.
import altair as alt
import pandas as pd
df = pd.DataFrame({
"ingredient1": ["potato", "onion", "carrot", "beet"],
"ingredient2": ["tomato", "pepper", "zucchini", "lettuce"],
"dish": [1, 2, 3, 4],
"cook1": ["cook1 dish1", "cook1 dish2", "cook1 dish3", "cook1 dish4"],
"cook1_score": [0.4, 0.3, 0.7, 0.9],
"cook2": ["cook2 dish1", "cook2 dish2", "cook2 dish3", "cook2 dish4"],
"cook2_score": [0.6, 0.2, 0.5, 0.6],
})
value_vars = [c for c in df.columns if c.endswith("_score")]
cook_names = [c.replace("_score", "") for c in value_vars]
id_vars = ["dish", "ingredient1", "ingredient2",] + cook_names
df_melt = df.melt(id_vars=id_vars, value_vars=value_vars,
var_name="cook", value_name="score")
chart = alt.Chart(df_melt).mark_circle().encode(
x=alt.X("dish:O", title="Dish number"),
y=alt.Y("score:Q", title="Score"),
color="cook:N",
tooltip=id_vars
)
chart.show()
I tried explicitly adding the score columns to the tooltip:
tooltip=id_vars+value_vars
But that yields the following error:
ValueError: cook1_score encoding field is specified without a type; the type cannot be inferred because it does not match any column in the data.
So how can I get altair to also show the score of (only) the currently hovered element?
| Altair: showing the value of the current point in the tooltip | cook1_score is not a column in df_melt, which is why you see the error. Setting tooltip=id_vars+['score'] will work.
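For example, keeping the rest of the chart from the question unchanged:
chart = alt.Chart(df_melt).mark_circle().encode(
    x=alt.X("dish:O", title="Dish number"),
    y=alt.Y("score:Q", title="Score"),
    color="cook:N",
    tooltip=id_vars + ["score"],  # 'score' is a real column of df_melt, so its type can be inferred
)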
|
76384490 | 76384624 | I have created a simple material app in flutter with:
flutter create --platforms=android,windows columntest
When I run the program on Android and Windows, I get some kind of padding between the ElevatedButtons on Android, but not on Windows. Do you know where this comes from and how I can make the design consistent?
The behavior seems to occur only with buttons (TextButton, OutlinedButton, ElevatedButton).
I have also tested this with container (with border), there it does not occur.
Here the code from the small app:
import 'package:flutter/material.dart';
void main() {
runApp(const MyApp());
}
class MyApp extends StatelessWidget {
const MyApp({super.key});
@override
Widget build(BuildContext context) {
return MaterialApp(
title: 'Flutter Demo',
home: Scaffold(
body: Center(
child: Column(
crossAxisAlignment: CrossAxisAlignment.center,
mainAxisAlignment: MainAxisAlignment.center,
children: [
ElevatedButton(child: const Text("Foobar1"), onPressed: () {}),
ElevatedButton(child: const Text("Foobar2"), onPressed: () {}),
],
),
),
),
);
}
}
Here is a screenshot at runtime:
Here my flutter version:
$ flutter --version
Flutter 3.10.0 β’ channel stable β’ https://github.com/flutter/flutter.git
Framework β’ revision 84a1e904f4 (3 weeks ago) β’ 2023-05-09 07:41:44 -0700
Engine β’ revision d44b5a94c9
Tools β’ Dart 3.0.0 β’ DevTools 2.23.1
My Android Emulator is an: Pixel_3a_API_33_x86_64
But the behaviour also occurs on my physical Pixel 6 (with android UpsideDownCake)
I look forward to your responses.
best regards
Michael
| Flutter: Inconsistent column padding on Buttons between Android and Windows | This behaviour comes from Flutter itself.
It is caused by the ThemeData.materialTapTargetSize parameter of the MaterialApp.
This setting decides the minimum touchable dimensions of a Material button, in your case the ElevatedButton.
You have 2 potential solutions
Change padding from ElevatedButton like below
ElevatedButton(
onPressed: () {},
style: const ButtonStyle(padding: MaterialStatePropertyAll(EdgeInsets.zero)),
child: const Icon(Icons.abc),
),
Change value from material app
MaterialApp(
title: 'Flutter Demo',
theme: ThemeData(
primarySwatch: Colors.blue,
materialTapTargetSize: MaterialTapTargetSize.shrinkWrap),
home: CupertinoPickerExample(),
)
Reference : https://stackoverflow.com/a/67580951
|
76378581 | 76379917 | In my Mojolicious Controller, I have:
my @promise;
foreach my $code (\&doit1, \&doit2,) {
my $prom = Mojo::Promise->new;
Mojo::IOLoop->subprocess(
sub {
my $r = $code->("Hello");
return $r;
},
sub {
my ($subprocess, $err, @res) = @_;
return $prom->reject($err) if $err;
$prom->resolve(@res);
},
);
push @promise, $prom;
}
Mojo::Promise
->all(@promise)
->then(
sub {
my ($result1, $result2) = map {$_->[0]} @_;
});
This works, and I can pass arguments (e.g. Hello) to my sub.
Now I converted doti1() and doit2() as helpers. So the code looks like:
foreach my $code (sub {$self->myhelper->doit1("Goodbye")},
sub {$self->myhelper->doit2("Good night")},
) {
my $prom = Mojo::Promise->new;
Mojo::IOLoop->subprocess(
sub {
my $r = $code->("Hello"); # this is ignored?
return $r;
},
sub {
my ($subprocess, $err, @res) = @_;
return $prom->reject($err) if $err;
$prom->resolve(@res);
},
);
push @promise, $prom;
}
How can I continue to pass the same set of arguments inside the loop (e.g. Hello), without having to specify them in each code ref (i.e. avoid Goodbye & Good night)? I like the idea of passing the same arguments for each code ref: $code->("Hello")
| Perl Mojolicious: Passing arguments to a code ref |
Now I converted doti1() and doit2() as helpers. So the code looks like:
foreach my $code (sub {$self->myhelper->doit1("Goodbye")},
sub {$self->myhelper->doit2("Good night")},
) {
#....
}
Yes but you are calling the helpers from another anonymous sub,
How can I continue to pass the same set of arguments inside the loop (e.g. Hello), without having to specify them in each code ref
so to recover the argument and pass it on to the helper, you just do:
foreach my $code (sub {my $arg = shift; $self->myhelper->doit1($arg)},
sub {my $arg = shift; $self->myhelper->doit2($arg)},
) {...}
or more generally as @Dada pointed out in the comments:
foreach my $code (sub {$self->myhelper->doit1(@_)},
sub {$self->myhelper->doit2(@_)},
) {...}
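Putting it together with the loop from the question (unchanged apart from the coderefs), "Hello" now reaches each helper via @_:
my @promise;
foreach my $code (sub {$self->myhelper->doit1(@_)},
                  sub {$self->myhelper->doit2(@_)},
                 ) {
    my $prom = Mojo::Promise->new;
    Mojo::IOLoop->subprocess(
        sub {
            my $r = $code->("Hello");   # forwarded to doit1/doit2 via @_
            return $r;
        },
        sub {
            my ($subprocess, $err, @res) = @_;
            return $prom->reject($err) if $err;
            $prom->resolve(@res);
        },
    );
    push @promise, $prom;
}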
|
76378589 | 76381943 | I need to retrieve the attributes of a certificate that is stored in the keychain on my Mac from the command line. I can collect them manually from the Keychain Access app, but I want to do that with a script.
I used the security command to get a certificate and "grep" to inspect the "subject" section:
security find-certificate -c "Apple Development" login.keychain | grep "subj"
and then got the following output (some omitted by "...").
"subj"<blob>=0x3081943...553 "0\201\2241\0320\03...02US"
In the output above, what format is the data following "subj"<blob>= and how can I parse it? I found that decoding the first half of the hexadecimal sequence(0x30...) with UTF-8 yields the second half of the string (0\201...), but I don't know what 0\201\2241\... means. I have tried other character codes, but they just give me garbled characters.
| How can I parse the certificate information output from the security command in Mac? | As for the format, the certificates are stored in DER/PEM format, which is a representation of ASN.1 encoded data. What you see in the output is the hexadecimal representation of the ASN.1 binary data. The blob indicates that the value or attribute is stored as binary data.
As for exporting (for certificates), I would highly recommend combining security with openssl as follows:
security find-certificate -p -c "Apple Development" login.keychain | openssl x509 -noout -subject
The -p option in the security command exports the found certificate in PEM format, which is something openssl can use. You can then pipe the PEM data into the openssl command, where one can easily extract the subject using the -subject option.
You can check out both the man page of security and the man page of openssl x509.
|
76384082 | 76384628 | I have two variables that I'm trying to model the relationship between and extract the residuals. The relationship between the two variables is clearly a non-linear exponential relationship. I've tried a few different approaches with nls, but I keep getting different error messages.
# dataset
df <- structure(list(y = c(464208.56, 334962.43, 361295.68, 426535.68, 258843.93, 272855.46,
166322.72, 244695.28, 227003.03, 190728.4, 156025.45, 72594.24, 56911.4, 175328.95, 161199.76,
152520.77, 190610.57, 60734.34, 31620.9, 74518.86, 45524.49, 2950.58, 2986.38, 15961.77, 12484.05,
6828.41, 2511.72, 1656.12, 5271.4, 7550.66, 3357.71, 3620.43, 3699.85, 3337.56, 4106.55, 3526.66,
2996.79, 1649.89, 4561.64, 1724.25, 3877.2, 4426.69, 8557.61, 6021.61, 6074.17, 4072.77, 4032.95,
5280.16, 7127.22),
x = c(39.23, 38.89, 38.63, 38.44, 38.32, 38.27, 38.3, 38.4, 38.56, 38.79, 39.06, 39.36, 39.68,
40.01, 40.34, 40.68, 41.05, 41.46, 41.93, 42.48, 43.14, 43.92, 44.84, 45.9, 47.1, 48.4, 49.78,
51.2, 52.62, 54.01, 55.31, 56.52, 57.6, 58.54, 59.33, 59.98, 60.46, 60.78, 60.94, 60.92, 60.71,
60.3, 59.69, 58.87, 57.86, 56.67, 55.33, 53.87, 52.33)),
row.names = c(NA, -49L),
class = c("tbl_df", "tbl", "data.frame"),
na.action = structure(c(`1` = 1L, `51` = 51L),
class = "omit"))
# initial model
m <- nls(y ~ a * exp(r * x),
start = list(a = 0.5, r = -0.2),
data = df)
Error in nls(y ~ a * exp(r * x), start = list(a = 0.5, r = -0.2), data = df, : singular gradient
# add term for alg
m <- nls(y ~ a * exp(r * x),
start = list(a = 0.5, r = -0.2),
data = df,
alg = "plinear")
Error in nls(y ~ a * exp(r * x), start = list(a = 0.5, r = -0.2), data = df, :
step factor 0.000488281 reduced below 'minFactor' of 0.000976562
| error messages fitting a non-linear exponential model between two variables | log-Gaussian GLM
As @Gregor Thomas suggests you could linearize your problem (fit a log-linear regression), at the cost of changing the error model. (Basic model diagnostics, i.e. a scale-location plot, suggest that this would be a much better statistical model!) However, you can do this efficiently without changing the error structure by fitting a log-link Gaussian GLM:
m1 <- glm(y ~ x, family = gaussian(link = "log"), data = df)
The model is y ~ Normal(exp(b0 + b1*x), s), so a = exp(b0), r = b1.
I tried using list(a=exp(coef(m1)[1]), r=coef(m1)[2]) as starting values, but even this was too finicky for nls().
There are two ways to get nls to work.
shifted exponential
As @GregorThomas suggests, shifting the x-axis to x=38 also works fine (given a sensible starting value):
m <- nls(y ~ a * exp(r * (x-38)),
start = list(a = 3e5, r = -0.35),
data = df)
provide nls with a gradient
The deriv function will generate a function with the right structure for nls (returns the objective function, with a ".grad" attribute giving a vector of derivatives) if you ask it nicely. (I'm also using the exponentiated intercept from the log-Gaussian GLM as a starting value ...)
f <- deriv( ~ a*exp(r*x), c("a", "r"), function.arg = c("x", "a", "r"))
m2 <- nls(y ~ f(x, a, r),
start = list(a = exp(coef(m1)[1]), r = -0.35),
data = df)
We can plot these to compare the predictions (visually identical):
par(las = 1, bty = "l")
xvec <- seq(38, 60, length = 101)
plot(y ~ x, df)
lines(xvec, predict(m1, newdata = data.frame(x=xvec), type = "response"),
col = 2)
lines(xvec, predict(m, newdata = data.frame(x=xvec)), col = 4, lty = 2)
lines(xvec, predict(m2, newdata = data.frame(x=xvec)), col = 5, lty = 2)
With a little bit of extra work (exponentiating the intercept for the Gaussian GLM, shifting the x-origin back to zero for the nls fit) we can compare the coefficients (only equal up to a tolerance of 2e-4 but that should be good enough, right?)
a1 <- exp(coef(m1)[[1]])
a2 <- coef(m)[[1]]*exp(-38*coef(m)[[2]])
all.equal(c(a = a1, r = coef(m)[[2]]),
c(a = a2, r = coef(m1)[[2]]), tolerance = 1e-4)
all.equal(c(a = a1, r = coef(m)[[2]]),
coef(m2), tolerance = 2e-4)
|
76382271 | 76382378 | I'm trying to insert the data inside a forall loop. In this case, I cannot use a temporary variable and set the result of the function beforehand.
The function just maps a number to a string:
create or replace function GetInvoiceStatus(status number)
return nvarchar2
as
begin
case status
when 0 then return 'New';
when 200 then return 'Sent';
when 300 then return 'Accepted';
end case;
return '';
end;
when I call this function like:
select GetInvoiceStatus(200) from dual;
I get the appropriate result.
However, when I try to insert the data I get errors.
The forall insert:
forall i in 1.. INVOICE_DATA.COUNT
insert into "InvoiceAudit"
("PropertyName", "OldValue", "NewValue" (
VALUES ('Status', (GetInvoiceStatus(invoice_data(i).status)),
((GetInvoiceStatus((select "Status" from "Invoice" where "InvoiceId" = invoice_data(i).invoiceId)))));
However, I get the following error:
[2023-06-01 15:02:57] [65000][6592] [2023-06-01 15:02:57] ORA-06592:
CASE not found while executing CASE statement [2023-06-01 15:02:57]
ORA-06512: at "PUBLIC.GETINVOICESTATUS", line 9 [2023-06-01 15:02:57]
ORA-06512: at "PUBLIC.INVOICESSP", line 63 [2023-06-01 15:02:57]
Position: 5
I have double-checked, and the results from invoice_data(i).Status and the other select value are both valid parameters (with their cases covered) and return the appropriate string when called outside the stored procedure.
Is the syntax wrong somewhere?
I would like to keep using forall if at all possible, because it is much faster than a regular for loop.
| Function call as a parameter inside insert values statement | This error means that the parameter value (status) is not one of the cases in the case expression (which are 0, 200, 300).
If you execute select GetInvoiceStatus(555) as dd from dual, you will get the same error. So, add an ELSE clause like this:
create or replace function GetInvoiceStatus(status number)
return nvarchar2
as
begin
case status
when 0 then return 'New';
when 200 then return 'Sent';
when 300 then return 'Accepted';
else return '';
end case;
end;
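A quick sanity check with a value that previously raised the error:
select GetInvoiceStatus(555) from dual;
-- now returns NULL (Oracle treats '' as NULL) instead of raising ORA-06592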
|
76384531 | 76384635 | I have a spreadsheet with an IMPORTRANGE and VLOOKUP to another file, where it looks up values in a pivot table. Some data is blank in the pivot table, and when I look it up with the formula I get a blank result, even though I have set it to return 0 via IFERROR.
Here's my formula:
=iferror(VLOOKUP(A5,importrange("12PaJfEC7Q7gOcCx2zlMHG3YybQuk1TSsNjZDw26qFRg","Converted Pivot!A:E"),3,false),0)
| pivot returning blank instead of 0 google sheet | You may try:
=let(Ξ£,ifna(vlookup(A5,importrange("12PaJfEC7Q7gOcCx2zlMHG3YybQuk1TSsNjZDw26qFRg","Converted Pivot!A:E"),3,),"no_match_found"),
if(Ξ£="",0,Ξ£))
A blank value will now be shown as 0, and a non-match will output no_match_found instead of an error.
|
76380577 | 76381169 | I am trying to make a layout with:
A header (gray block in the snippet)
A body (lime border)
Main body content (blocks with red border)
If you scroll horizontally, the header should not scroll; it should be full width and stay in view. If you scroll vertically, the header should scroll off the page as usual. The height of the header is dynamic and fits the content within it (this SO answer works with a fixed height).
The <main> element is allowed to be wider than the viewport, but the header is always the viewport width.
The reason I don't add max-width: 100%; overflow-x: auto on the <main> element (like this SO answer) is that the horizontal scroll bar then appears at the bottom of the element. Say one is reading the first block and wishes to scroll horizontally: you have to scroll to the bottom of the main element to see the horizontal scroll bar, scroll to the side, then scroll back up. I wish to have the horizontal scroll bar always present if main is wider than the viewport.
I have tried position: sticky/fixed on the header but could not get it to work.
I would prefer not to use JavaScript if possible.
header {
padding: 32px;
background: gray;
width: 100%;
}
main {
border: 2px solid lime;
min-width: 100%;
}
div {
height: 200px;
width: 120%; /* make it overflow horizontally */
display: flex;
align-items: center;
justify-content: center;
border: 2px solid red;
}
<header>The Header should not scroll horizntally<br>(is dynamic height)</header>
<main>
<div>content 1</div>
<div>content 2</div>
<div>content 3</div>
<div>content 4</div>
<div>content 5</div>
<div>content 6</div>
</main>
| Make an element not scroll horizontally | What I have done here is make the header sticky to the left edge of the screen. Its parent element must be aware of the size of your content to allow the header to move, so I set the body min-width to min-content, and the same for main so it can pass its children's size up to body.
You may also notice I used box-sizing: border-box; on the header; that's so the padding is taken into account when the element size (100vw in this case) is calculated. You don't want to use % for the header width because it won't have room to slide.
Also, the div sizes must not depend on the parent size, so you can't use % here either.
body{
min-width: min-content;
}
header {
box-sizing: border-box;
position: sticky;
left: 0;
padding: 32px;
background: gray;
width: 100vw;
}
main {
min-width: min-content;
border: 2px solid lime;
}
div {
height: 200px;
width: 120vw; /* make it overflow horizontally */
display: flex;
align-items: center;
justify-content: center;
border: 2px solid red;
}
<body>
<header>The Header should not scroll horizntally<br>(is dynamic height)</header>
<main>
<div>content 1</div>
<div>content 2</div>
<div>content 3</div>
<div>content 4</div>
<div>content 5</div>
<div>content 6</div>
</main>
</body>
|
76382239 | 76382400 | I'm trying to create a small web application using Svelte.
One of the requirements is to be able to change the application "theme" on demand, for example - dark theme, light theme, high contrast, and so on.
I've been using an online mixin snippet to help me with that -
https://medium.com/@dmitriy.borodiy/easy-color-theming-with-scss-bc38fd5734d1
However, this doesn't work consistently, and I often get errors like:
[vite-plugin-svelte] /path/to/svelte/component.svelte:61:0 Unused CSS selector "main.default-theme div.some.element.identification"
even tho the selector is used and is receiving it's non-themed attributes.
Inside a themes.scss file:
@mixin themify($themes) {
@each $theme,
$map in $themes {
main.#{$theme}-theme & {
$theme-map: () !global;
@each $key,
$submap in $map {
$value: map-get(map-get($themes, $theme), '#{$key}');
$theme-map: map-merge($theme-map, ($key: $value)) !global;
}
@content;
$theme-map: null !global;
}
}
}
@function themed($key) {
@return map-get($theme-map, $key);
}
$themes: (
default: (
strokeColor: green,
fillColor: red,
),
);
and inside another scss file that is importing themes.scss:
div.some.element.identification {
some-non-themed-attribute: some-value;
@include themify($themes) {
stroke: themed('strokeColor');
fill: themed('fillColor');
}
}
Now the punchline: when using this methodology, some elements receive their appropriate themed attributes, and others don't.
I am also seeing the following error:
[vite-plugin-svelte] /path/to/svelte/component.svelte:61:0 Unused CSS selector "main.default-theme div.some.element.identification"
The issue doesn't seem to be in the CSS selectors, since the elements that don't receive the themed attributes still receive the other non-themed attributes in the same CSS rule.
Two final observations -
When I'm building the project (using vite build), I can see that the css asset file being created doesn't include the css selectors that are missing their themed attributes.
When i'm using the devtools to locate the supposedly unused selectors (whose themed attributes are not present), they can be found - despite the error message.
I've been trying different ways to solve this issue and nothing works consistently.
Thank you in advance for your help!
| "Unused CSS selector" when using a SASS themify mixin with Svelte and Vite: | You could try checking these different items:
If you use svelte-preprocess, try to add scss: { prependData: `@import 'src/styles/theme.scss';` } or whatever the path to your theme is, to the config object.
If it still does not work, maybe try swapping svelte-preprocess for vitePreprocess (from @sveltejs/vite-plugin-svelte)
Disable any potential css purge plugin
|
76384567 | 76384661 | I learned 2 ways of inserting elements into a vector.
And I've been wondering which way is faster since I'm working with time limits.
Method 1:
int n;
cin>>n;
vector<int> v(n);
for(int i = 0;i<n;i++){
cin>>v[i];
}
Method 2:
int n;
cin>>n;
vector<int> v;
for(int i = 0;i<n;i++){
int x;
cin>>x;
v.push_back(x);
}
If you have a better method to suggest, it'd be appreciated!
| Is it faster to use push_back(x) or using an index (capacity)? | Both have issues:
You should be using reserve(n)
int n;
cin >> n;
vector<int> v;
v.reserve(n);
for(int i = 0; i < n; ++i){
int x;
cin >> x;
v.emplace_back(x);
}
In the first version: Setting size.
Here you have the issue that you are default-constructing all the elements up front. For integers this may be insignificant, but if we extend this to non-integer types, a constructor needs to be called for each element, and afterwards the assignment operator is used to copy over them.
The second option: push_back
Here you run into the risk of the underlying storage being reallocated (potentially multiple times). Each time you re-allocate you need to copy the data from the old storage to the new storage.
Again this hurts for integers but really hurts for types with constructors and destructors.
Prefer: emplace_back()
Rather than push_back, which needs a fully constructed object, you can use emplace_back and pass in the arguments used to construct the object. This allows the vector to construct the object in place. For simple integers or classes with efficient move semantics this makes little difference, but it is worth adopting as a general habit.
|
76382402 | 76382476 | I am trying to set up a GIF as a background, but it does not work:
In the code I import GridMatrix and extract the src from it, then I use the video tag to try to render it fullscreen.
import React from 'react';
import GridMatrix from '../assets/gridMatrix.gif';
function Home() {
return (
<div>
<video
className="matrix-bg fixed top-0 left-0 w-full h-full z-[-1] object-cover"
autoPlay
loop
muted
>
<source
src={GridMatrix.src}
type="video/gif"
/>
</video>
<main className="container mx-auto py-10 px-4 flex flex-col items-center justify-center">
<h1 className="text-4xl font-bold mb-8 text-white text-center">
UNS Demo
</h1>
<button className="bg-blue-500 hover:bg-blue-600 text-white font-bold py-2 px-4 rounded">
Login
</button>
</main>
</div>
);
}
export default Home;
| Background video in Node.js 13 | GIF files are not video files and the MIME type for them is image/gif. The <video> tag will not render them.
You can embed images using <img src={GridMatrix.src} /> or set it as a background-image with CSS on an element.
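For the snippet in the question, that could look like this (a sketch; the class names are kept from the original video tag):
<img
  src={GridMatrix.src}
  alt=""
  className="matrix-bg fixed top-0 left-0 w-full h-full z-[-1] object-cover"
/>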
Nowadays, websites often embed what they call 'GIFs' that are actually video files, but notice that those are often .webm or .mp4 files, both being video formats, thus compatible with <video>.
|
76380888 | 76381205 | I have the following XML file to convert to CSV using an Azure Function in C#. The XML file is located in an Azure Data Lake location. The structure of the file is as follows.
<root id="1" created_date="01/01/2023" asof_date="01/01/2023">
<level1>
<data1>sdfs</data1>
<data2>true</data2>
<level2 rec="4">
<level_record>
<groupid>1</groupid>
<groupname>somegroup</groupname>
<groupdate>01/01/2023</groupdate>
<groupvalue>5</groupvalue>
<groupkey>ag55</groupkey>
</level_record>
<level_record>
<groupid>2</groupid>
<groupname>somegroup1</groupname>
<groupdate>02/01/2023</groupdate>
<groupvalue>6</groupvalue>
<groupkey>ag56</groupkey>
</level_record>
</level2>
</level1>
</root>
How do I read the file from Azure Data Lake and convert it to a CSV file?
| Convert XML File with nested hierarchy placed in Azure Data lake to CSV using C# Azure Function | Here is an example of an Azure Function in C# that reads an XML file from Azure Data Lake Storage and converts it to a CSV file:
using Microsoft.Azure.Functions.Worker;
using Microsoft.Extensions.Logging;
using Microsoft.Azure.Storage;
using Microsoft.Azure.Storage.Auth;
using Microsoft.Azure.Storage.Blob;
using System;
using System.IO;
using System.Xml.Linq;
namespace YourNamespace
{
public static class ConvertXmlToCsvFunction
{
[Function("ConvertXmlToCsvFunction")]
public static void Run([BlobTrigger("your-container/{name}", Connection = "AzureWebJobsStorage")] Stream xmlStream, string name, FunctionContext context)
{
var logger = context.GetLogger("ConvertXmlToCsvFunction");
logger.LogInformation($"Processing file: {name}");
try
{
// Read the XML file content
string xmlContent;
using (StreamReader reader = new StreamReader(xmlStream))
{
xmlContent = reader.ReadToEnd();
}
// Parse the XML content
XDocument xDoc = XDocument.Parse(xmlContent);
// Extract data and convert to CSV format
XElement rootElement = xDoc.Element("root");
XElement level1Element = rootElement.Element("level1");
XElement level2Element = level1Element.Element("level2");
// Create the CSV header
string csv = "groupid,groupname,groupdate,groupvalue,groupkey" + "\n";
// Iterate over level_record elements and extract data
foreach (XElement recordElement in level2Element.Elements("level_record"))
{
string groupid = recordElement.Element("groupid").Value;
string groupname = recordElement.Element("groupname").Value;
string groupdate = recordElement.Element("groupdate").Value;
string groupvalue = recordElement.Element("groupvalue").Value;
string groupkey = recordElement.Element("groupkey").Value;
// Append the CSV row
csv += $"{groupid},{groupname},{groupdate},{groupvalue},{groupkey}" + "\n";
}
// Save the CSV content to a file
string csvFileName = Path.ChangeExtension(name, "csv");
string csvFilePath = Path.Combine(Path.GetTempPath(), csvFileName);
File.WriteAllText(csvFilePath, csv);
logger.LogInformation($"CSV file created: {csvFilePath}");
}
catch (Exception ex)
{
logger.LogError($"An error occurred: {ex.Message}");
throw;
}
}
}
}
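If you also want the CSV written back to storage rather than left in a temp file, one possible sketch (assuming the Azure.Storage.Blobs package; the container name is a placeholder):
using Azure.Storage.Blobs;

// Upload the generated CSV; the connection string source and container name are assumptions.
var blobClient = new BlobClient(
    Environment.GetEnvironmentVariable("AzureWebJobsStorage"),
    "your-container",
    csvFileName);
blobClient.Upload(csvFilePath, overwrite: true);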
|
76380899 | 76381218 | I have tried using the validate-on and rules props; they validate and give me error messages, but the new items still get appended to the state. Is there any way to change this behaviour so that whenever there is a validation error, the item is not appended to the state?
ParentComponent.vue
...
<MultiSelect
v-model="form.tags"
label="Select Tags"
:items="tags"
item-title="name"
item-value="id"
/>
...
MultiselectComponent.vue
<template>
<v-combobox
multiple
chips
closable-chips
clearable
:return-object="false"
variant="outlined"
/>
</template>
What I want
Basically I don't want user to add tags that starts with a number or are all numbers
e.g. 123, 2VueJs, 456890, 68yjkk etc.
| How can we stop vuetify 3 v-combobox from adding new items if the validation checks fail? | Validation is used to show error messages to the user and prevent the form from being submitted. But if you remove invalid values immediately, there is no error message and the values are always going to be valid.
So instead of validation, you can just filter the values coming out of the component. Just replace the v-model with the underlying :modelValue and @update:modelValue and pipe the values through a filter:
<v-combobox
:model-value="values"
@update:model-value="values = filterInvalid($event)"
...
/>
You can also use the filter on the input of :modelValue to filter any invalid values coming in, depending on if there are preset values and how to deal with them if they are invalid.
Here it is in a snippet:
const { createApp, ref } = Vue;
const { createVuetify } = Vuetify
const vuetify = createVuetify()
const app = {
setup(){
return {
values: ref([12, '12n','n']),
filterInvalid: (inputValues) => inputValues.filter(value => typeof value === 'string' && isNaN(value[0]))
}
}
}
createApp(app).use(vuetify).mount('#app')
<link rel="stylesheet" type="text/css" href="https://cdn.jsdelivr.net/npm/vuetify@3/dist/vuetify.min.css" />
<link href="https://cdn.jsdelivr.net/npm/@mdi/font@5.x/css/materialdesignicons.min.css" rel="stylesheet">
<div id="app">
<v-app>
<v-main class="pa-8">
<v-combobox
:model-value="filterInvalid(values)"
@update:model-value="values = filterInvalid($event)"
multiple
chips
closable-chips
clearable
variant="outlined"
></v-combobox>
<div>Values: {{values}}</div>
</v-main>
</v-app>
</div>
<script src="https://unpkg.com/vue@3/dist/vue.global.prod.js"></script>
<script src="https://cdn.jsdelivr.net/npm/vuetify@3/dist/vuetify.min.js"></script>
Note however that removing values automatically can feel like a bug to users. You might be better off with the validation approach after all.
|
76382472 | 76382541 | I want to convert this data using Jolt. I tried, but it is showing null as the result.
Input :
{
"employer": [
{
"id": "98",
"place_id": "7871",
"name": "Iti-ha-cho"
}
]
}
Expected Output :
{
"id" : "98",
"place_id" : "7871",
"name" : "Iti-ha-cho"
}
Jolt spec I tried that didn't work:
{
"operation": "shift",
"spec": {
"employer": {
"id": "[&1].&",
"place_id": "place_id",
"name": "name"
}
}
}
| I tried Simple Jolt transformation but its not working | While the immediate issue is that your spec is missing the wrapping square brackets (Jolt expects a list of operations), you don't need to spell out each attribute; the following spec will suffice
[
{
"operation": "shift",
"spec": {
"employer": {
"*": ""
}
}
}
]
as you only want to extract the sub-content of the employer array.
|
76382467 | 76382555 | I think it's pretty clear in my code what I am trying to do. Basically I'm trying to use the max and min parameters of the inputs so that they can never cross each other. This doesn't work, of course. I am using React, with useState to set the values whenever the form is submitted, and I pass these variables into my database fetch. I feel like using two states (one for the temporary input value and one to pass the submitted value to the fetch) is not a good way of solving this.
const [ LowestPrice, setLowestPrice ] = useState(0)
const [ HighestPrice, setHighestPrice ] = useState(500)
useEffect(() =>{
const getProps = async () => {
const { data, count, error } = await backbase.from('products_2')
.select('*', { count: 'exact' })
.gte('price', LowestPrice)
.lt('price', HighestPrice)
.range(indexOfFirstItem, indexOfLastItem - 1)
}}, [LowestPrice, HighestPrice])
const handleSubmit = (e) => {
e.preventDefault()
setLowestPrice(document.getElementById("lowest_price")?.value)
setHighestPrice(document.getElementById("highest_price")?.value)
}
<form onSubmit={handleSubmit}>
<label htmlFor="lowest_price">minimum price</label>
<input
type="number"
id="lowest_price"
defaultValue={LowestPrice}
min={0}
max={document.getElementById("highest_price")?.value}
/>
<label htmlFor="highest_price">maximum price</label>
<input
type="number"
id="highest_price"
defaultValue={HighestPrice}
min={document.getElementById("lowest_price")?.value}
max={500}
/>
<button type="submit">apply filters</button>
</form>
I left out non-essential parts of the code to make it easier to read. It's the min and max in the form that are the most relevant.
| form with 2 int inputs where one always has to be lower and one always has to higher | Firstly, don't use getElementById to listen to changes. React triggers a re-render on components when their value changes. In order to retain the value, we use useState. Secondly, you can use onChange to compare both values in state before deciding whether or not to discard the new value. Try something like this:
const [lowestPrice, setLowestPrice] = useState(0);
const [highestPrice, setHighestPrice] = useState(1);
return (
<form onSubmit={handleSubmit}>
<label htmlFor="lowest_price">minimum price</label>
<input
onChange={e => e.target.value <= highestPrice && setLowestPrice(e.target.value)}
type="number"
id="lowest_price"
value={lowestPrice}
min={0}
max={highestPrice}
/>
<label htmlFor="highest_price">maximum price</label>
<input
onChange={e => e.target.value > lowestPrice && setHighestPrice(e.target.value)}
type="number"
id="highest_price"
value={highestPrice}
min={lowestPrice}
max={500}
/>
<button type="submit">apply filters</button>
</form>
)
|
76384489 | 76384691 | This is the parent table named route
| id | start_day | end_day |
|----|------------|------------|
| 1 | 2023/05/01 | 2023/05/07 |
| 2 | 2023/05/01 | 2023/05/07 |
| 3 | 2023/05/01 | 2023/05/07 |
| 4 | 2023/05/01 | 2023/05/07 |
| 5 | 2023/05/01 | 2023/05/07 |
id
route_id
visit_status
point_of_delivery_plant_name
point_of_delivery_plant_number
1
1
5
CROP SOLUTIONS S.A.
563
2
1
5
CROP SOLUTIONS S.A.
563
3
1
5
CROP SOLUTIONS S.A.
563
4
2
0
SAMA S.A.
781
5
3
0
WALTER SAMA HARMS
732
6
4
5
AGROSER S.A.
242
7
4
5
AGROSER S.A.
242
8
5
5
AGROFERTIL S.A
287
9
5
5
AGROFERTIL S.A
287
10
5
5
AGROFERTIL S.A
287
And a third child table named event; for each route_detail record there is one event. This table is a child of route_detail:
| id | route_detail_id | event_type | event_description |
|----|-----------------|------------|-------------------|
| 50 | 1 | 1 | start visit |
| 51 | 2 | 2 | recurrent form |
| 52 | 3 | 3 | end visit |
| 53 | 4 | 1 | start visit |
| 54 | 5 | 1 | start visit |
| 55 | 6 | 1 | start visit |
| 56 | 7 | 2 | recurrent form |
| 57 | 8 | 1 | start visit |
| 58 | 9 | 2 | recurrent form |
| 59 | 10 | 4 | harvest advance |
What I'm trying to do is get all the routes with visit_status = 5 that don't have events with event_type = 3 (end visit), but I can't manage to get that result.
I tried something like this after some research, but the query would still return routes whose route_details have event_type = 3 on them:
SELECT r.id,
r.start_day,
r.end_day,
de.point_of_delivery_plant_name,
de.point_of_delivery_plant_number,
de.visit_status
FROM route r
JOIN route_detail de ON de.route_id = r.id
WHERE NOT EXISTS (SELECT 1
FROM route ro
JOIN route_detail rd ON rd.route_id = ro.id
JOIN event ev ON ev.route_detail_id = rd.id
WHERE rd.route_id = r.id
AND ev.event_type_id !=7
AND rd.visit_status = '5'
AND rd.id = de.id)
AND de.visit_status = '5'
GROUP BY 1,2,3,4,5,6
ORDER BY r.id DESC;
This is how my results should look, since only routes 4 and 5 have visit_status = '5' and their route_details don't have event_type = 3:
Note: I didn't make the tables
| id | start_day | end_day |
|----|------------|------------|
| 4 | 2023/05/01 | 2023/05/07 |
| 5 | 2023/05/01 | 2023/05/07 |
| Postgresql Need a query that gives me all the parents that don't have child with a specific status value | If you want to do it with the EXISTS expression, you can use:
one EXISTS to check the existence of a route_detail with route_detail.visit_status = 5
one NOT EXISTS to check the non-existence of an event with event.event_type = 3 on the route's route_details
SELECT r.*
FROM route r
WHERE EXISTS(SELECT 1
FROM route_detail rd
WHERE r.id = rd.route_id
AND rd.visit_status = 5 )
AND NOT EXISTS(SELECT 1
FROM route_detail rd
INNER JOIN "event" e
ON rd.id = e.route_detail_id
WHERE r.id = rd.route_id
AND e.event_type = 3)
Output:
| id | start_day | end_day |
|----|--------------------------|--------------------------|
| 4 | 2023-05-01T00:00:00.000Z | 2023-05-07T00:00:00.000Z |
| 5 | 2023-05-01T00:00:00.000Z | 2023-05-07T00:00:00.000Z |
Check the demo here.
|
76381164 | 76381242 | I wanna sum cells that have the same color. I know there are some VBA functions to do that, but my problem is kinda specific: I want to sum cell values from a single column, based on the cell colors in another column.
I've added an example and the code I used. I get the "#VALUE!" error on the line where I try to access the Interior property.
Function SumByColor(CellColor As Range, rRange As Range)
Dim cSum As Double
Dim ColIndex As Integer
Dim compatedCell As Range
Debug.Print ("sumbycolor called")
ColIndex = CellColor.Interior.ColorIndex
For Each cl In rRange
comparedCell = Worksheets("HA").Cells(cl.Row, 1)
Debug.Print (comparedCell.Interior.ColorIndex) #nothing printed
If comparedCell.Interior.ColorIndex = ColIndex Then
cSum = WorksheetFunction.Sum(cl, cSum)
End If
Next cl
SumByColor = cSum
End Function
Thx for your help.
| Sum cells by colors based on other colum cells | You should dim all your variables.
Dim cl As Range, comparedCell As Range
For Each cl In rRange
Set comparedCell = Worksheets("HA").Cells(cl.Row, 1)
Debug.Print (comparedCell.Interior.ColorIndex) 'nothing printed
If comparedCell.Interior.ColorIndex = ColIndex Then
cSum = WorksheetFunction.Sum(cl, cSum)
End If
Next cl
As comparedCell is a Range-object you have to use Set.
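Putting both fixes together, the whole UDF could look like this (a sketch based on the question's code; note that changing a cell's fill colour does not trigger a recalculation, so the result only refreshes when the sheet recalculates):
Function SumByColor(CellColor As Range, rRange As Range) As Double
    Dim cSum As Double
    Dim ColIndex As Integer
    Dim cl As Range, comparedCell As Range
    ColIndex = CellColor.Interior.ColorIndex
    For Each cl In rRange
        'Compare against the color of column A in the same row
        Set comparedCell = Worksheets("HA").Cells(cl.Row, 1)
        If comparedCell.Interior.ColorIndex = ColIndex Then
            cSum = cSum + cl.Value
        End If
    Next cl
    SumByColor = cSum
End Function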
|
76382489 | 76382575 | Suppose I have two classes, no_copy and no_move which are base classes. From which any class can derive from and hence have their constructors and operators modified.
As the name suggests, no_copy will literally just do the following; (commented out what it should do)
class base : no_copy
{
/*
base(const base&)=delete;
base& operator=(const base&)=delete;
*/
};
And the same for no_move,
class base : no_move
{
/*
base(base&&)=delete;
base& operator=(base&&)=delete;
*/
};
These classes (no_copy & no_move) make it so that any class deriving from them is not copyable or moveable.
Right now I am just using a macro to do this;
// For classes without a qualified name
#define __NO_COPY__(__class__) __class__::__class__(const __class__&)=delete; __class__& __class__::operator=(const __class__&)=delete;
// For classes with qualified names
#define __NO_COPY_LIB__(__qualified_name__,__class__) __qualified_name__::__class__(const __class__&)=delete; __class__& __qualified_name__::operator=(const __class__&)=delete;
And the same technique for not moving.
EDIT:
Examples of the macros;
suppose a class as such;
namespace wrapper
{
class my_class {};
__NO_COPY__(my_class);
}
__NO_COPY_LIB__(::wrapper::my_class,my_class);
The macros work just fine, but they don't look pretty. Moreover, they are not easy to refactor, as these macros can be anywhere in the definition or implementation.
A syntax like the following would mean that all of this in right there in the definition of the class and can be changed or modified very easily;
class base : public attributes<no_copy,no_move> {};
The attributes struct is just defined as;
template <typename... _BCs/*Base Classes*/>
struct attibutes : public _BCs... {};
Now obviously the syntax of inheriting doesn't matter that much. All I want it to be able to define where the class can be copied,moved or not. Also, if there is a method other than inheritance which could work please do suggest it.
All I am looking for is a method that is easily refactorable.
I have tried finding solutions online about how to overload constructors, but so far I have not found anything about deleting them, and hence I doubt whether it is even possible to delete constructors or operators using inheritance.
Any help would be greatly appreciated.
| How to delete constructors and operators using inheritance in c++? | Special member functions delegate to the corresponding functions of their subobjects (bases and members) and will be defaulted as deleted if that delegation finds a deleted or inaccessible function.
https://timsong-cpp.github.io/cppwp/n4868/class.ctor#class.copy.ctor-10
https://timsong-cpp.github.io/cppwp/n4868/class.copy.assign#7
Although you could use a non-static member for this, using a base class is better because it enables the "Empty Base Optimization", not costing anything.
struct no_copy
{
protected:
no_copy() = default;
no_copy(const no_copy&) = delete;
no_copy& operator=(const no_copy&) = delete;
};
class important_class : no_copy {};
pre-C++11 version:
class no_copy // members default to "private"
{
no_copy(const no_copy&); // no definition needed
no_copy& operator=(const no_copy&); // no definition needed
protected:
no_copy() {}
};
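A matching no_move base is not shown above, but as a sketch following the same pattern it would delete the move operations while keeping the copies defaulted:
struct no_move
{
protected:
    no_move() = default;
    no_move(const no_move&) = default;            // copies stay available
    no_move& operator=(const no_move&) = default;
    no_move(no_move&&) = delete;                  // moves are forbidden
    no_move& operator=(no_move&&) = delete;
};
Note that a derived class's implicitly-declared move operations are then defined as deleted and ignored by overload resolution, so expressions like std::move(obj) silently fall back to copying rather than failing to compile.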
This won't stop the derived class from defining a user-provided special member that explicitly avoids calling the base class version, but if you do that it shows intent to make the derived class actually be copyable/whatever after all.
|
76384393 | 76384692 | I have a simple controller, in this controller I have this endpoint
@Post('/temp')
async asdf(
@Body() form: Record<string, string>,
@Res({ passthrough: true }) response: Response,
) {
this.logger.debug(JSON.stringify(form));
await response.json({ ok: true, form: JSON.stringify(form) });
}
When I try to POST some form data on it, using cURL or the browser, the object form is empty.
Example:
curl -X POST http://localhost:4000/mycontroller/temp -H "Content-Type: application/x-www-form-urlencoded" -d "param1=value1&param2=value2"
Results in
{"ok":true,"form":"{}"}
Other controllers work; I can't see any difference between my controller and the endpoint to others.
What I'm doing wrong or missing?
| Nest.js empty Body for form data | If you're using form data you need to implement a form data parser, like busboy or multer. Nest integrates with multer and express already via the FileInterceptor and its variants. This will force multer to parse the request. If you don't use any files, just the form data format, I believe there is a NoFileInterceptor or similar.
Looks like there is no NoFileInterceptor. You could use AnyFilesInterceptor instead and ignore req.files; just be aware that it could end up having your server taken down if a really nasty set of files comes in for multer to parse.
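As a rough sketch of that approach (my own illustration, assuming @nestjs/platform-express is installed; the interceptor is spelled AnyFilesInterceptor in current Nest versions):
import { Body, Controller, Post, UseInterceptors } from '@nestjs/common';
import { AnyFilesInterceptor } from '@nestjs/platform-express';

@Controller('mycontroller')
export class MyController {
  @Post('/temp')
  @UseInterceptors(AnyFilesInterceptor()) // makes multer parse multipart/form-data bodies
  async temp(@Body() form: Record<string, string>) {
    // the non-file fields of the form now arrive in @Body()
    return { ok: true, form };
  }
}
The client must then send multipart/form-data (e.g. curl -F "param1=value1" -F "param2=value2" ...) rather than a URL-encoded body.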
|
76381056 | 76381247 | I am trying to make a graph from a list of tuples stored in a variable. I found G.add_edges_from(e) for making a graph from a list of tuples, but the problem is that this does not work: when I try to, for example, print the graph, it returns None. I appreciate answers that solve my problem. I use the code below to make the graph:
import networkx as nx
e = [(1,2),(1,3),(2,3)]
G = nx.Graph()
g1 = G.add_edges_from(e)
print(g1)
Update:
I tested this code, but it again gives None when trying to print:
e = [[(1,2),(1,3),(2,3)],[(10,20),(10,30),(20,30)]]
graph_list = []
for i in e:
graph_list.append(nx.Graph().add_edges_from(i))
print(graph_list[0].nodes)
| How to construct a graph using a list of tuples in python in networkX? | Let's break it down shall we?
You assigned a list of edges in e, then you made a graph with G.
However, your issue is you're trying to assign g1 to what the method add_edges_from returns (which is None).
What you actually want is something like this:
import networkx as nx
e = [(1,2),(1,3),(2,3)]
G = nx.Graph()
G.add_edges_from(e)
print(G)
Since the add_edges_from method returns None it is working as intended, you should try printing your original graph instead. I hope this helps and clarifies things for you!
Edit:
If you insist on just using the list of tuples, you can just do away with variables. Use lists to store graph objects instead, keep storing them in a loop as such:
e = [[(1,2),(1,3),(2,3)],[(10,20),(10,30),(20,30)]]
graph_list = []
for i in e:
G = nx.Graph()
G.add_edges_from(i)
graph_list.append(G)
print(graph_list[0].nodes)
print(graph_list[1].nodes)
Then you can use indices to get each specific graph you make (which would be stored separately in the list)
Or you might want to start dumping each graph into a JSON file (answered here: Method to export networkx graph to json graph file?), which can solve your RAM issue.
|
76381127 | 76381250 | // msvc 17.3.5
// sizeof (long) : 4
#include <stdlib.h>
int main(){
long i = 0xAABBCCDD;// 0x AABBCCDD in debugger window
char number[] = "AABBCCDD EEFF";
char* p;
long var = strtol ( number, &p, 16 );// 0x 7FFF FFFF
perror ( "?:" ); // ?:: Result too large
}
I tried char number[] = "AABBCC EEFF"; and it works fine.
I expect 0xAABBCCDD inside var instead of 0x7fffffff.
What is wrong?
| Why is strtol() returning 0x7fffffff instead of the expected 0xAABBCCDD? | From the C Standard (7.22.1.4 The strtol, strtoll, strtoul, and strtoull functions)
8 The strtol, strtoll, strtoul, and strtoull functions
return the converted value, if any. If no conversion could be
performed, zero is returned. If the correct value is outside the range
of representable values, LONG_MIN, LONG_MAX, LLONG_MIN,
LLONG_MAX, ULONG_MAX, or ULLONG_MAX is returned (according to the return type and sign of the value, if any), and the value of
the macro ERANGE is stored in errno.
The positive hexadecimal constant 0xAABBCCDD can not be represented in an object of the signed type long int provided that sizeof( long int ) is equal to 4.
For example try this demonstration program
#include <stdio.h>
#include <limits.h> /* for LONG_MAX */
int main( void )
{
printf( "%#X\n", LONG_MAX );
}
The program output is
0X7FFFFFFF
Note: as in this case sizeof( long ) is equal to sizeof( unsigned int ) and the value is representable in an object of type unsigned int, the conversion specifier X is used. Otherwise you need to include the header <inttypes.h> and use a macro such as PRIX32.
As you can see LONG_MAX (the maximum positive value that can be stored in an object of the type long int) is less than the positive hexadecimal constant 0xAABBCCDD.
Instead of using the function strtol use function strtoul
unsigned long var = strtoul ( number, &p, 16 );
Or if you want to deal with signed integers then use function strtoll.
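A complete example (my own sketch) with the question's input:
#include <stdio.h>
#include <stdlib.h>

int main( void )
{
    char number[] = "AABBCCDD EEFF";
    char *p;
    unsigned long var = strtoul( number, &p, 16 );
    printf( "%#lx\n", var ); /* outputs 0xaabbccdd; p points at " EEFF" */
}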
|
76383877 | 76384741 | I have defined many pip packages in a requirements.txt, but I have not defined the "futures" package:
...
future == 0.18.3
six == 1.16.0
joblib == 1.2.0
...
And then I download all the packages with the following command on Ubuntu 22.04:
pip3.9 download -r "/home/requirements.txt"
The above command exited with the following error:
...
...
Collecting widgetsnbextension~=4.0.7
Downloading widgetsnbextension-4.0.7-py3-none-any.whl (2.1 MB)
━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━ 2.1/2.1 MB 3.9 MB/s eta 0:00:00
Collecting branca>=0.5.0
Downloading branca-0.6.0-py3-none-any.whl (24 kB)
Collecting traittypes<3,>=0.2.1
Downloading traittypes-0.2.1-py2.py3-none-any.whl (8.6 kB)
Collecting xyzservices>=2021.8.1
Downloading xyzservices-2023.5.0-py3-none-any.whl (56 kB)
━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━ 56.5/56.5 KB 1.3 MB/s eta 0:00:00
Collecting futures
Downloading futures-3.0.5.tar.gz (25 kB)
Preparing metadata (setup.py) ... error
error: subprocess-exited-with-error
× python setup.py egg_info did not run successfully.
│ exit code: 1
╰─> [25 lines of output]
Traceback (most recent call last):
File "<string>", line 2, in <module>
File "<pip-setuptools-caller>", line 14, in <module>
File "/python39/lib/python3.9/site-packages/setuptools/__init__.py", line 18, in <module>
from setuptools.dist import Distribution
File "/python39/lib/python3.9/site-packages/setuptools/dist.py", line 32, in <module>
from setuptools.extern.more_itertools import unique_everseen
File "<frozen importlib._bootstrap>", line 1007, in _find_and_load
File "<frozen importlib._bootstrap>", line 986, in _find_and_load_unlocked
File "<frozen importlib._bootstrap>", line 666, in _load_unlocked
File "<frozen importlib._bootstrap>", line 565, in module_from_spec
File "/python39/lib/python3.9/site-packages/setuptools/extern/__init__.py", line 52, in create_module
return self.load_module(spec.name)
File "/python39/lib/python3.9/site-packages/setuptools/extern/__init__.py", line 37, in load_module
__import__(extant)
File "/python39/lib/python3.9/site-packages/setuptools/_vendor/more_itertools/__init__.py", line 1, in <module>
from .more import * # noqa
File "/python39/lib/python3.9/site-packages/setuptools/_vendor/more_itertools/more.py", line 5, in <module>
from concurrent.futures import ThreadPoolExecutor
File "/tmp/pip-download-jelw4tc2/futures/concurrent/futures/__init__.py", line 8, in <module>
from concurrent.futures._base import (FIRST_COMPLETED,
File "/tmp/pip-download-jelw4tc2/futures/concurrent/futures/_base.py", line 357
raise type(self._exception), self._exception, self._traceback
^
SyntaxError: invalid syntax
[end of output]
note: This error originates from a subprocess, and is likely not a problem with pip.
error: metadata-generation-failed
× Encountered error while generating package metadata.
╰─> futures
note: This is an issue with the package mentioned above, not pip.
hint: See above for details.
How to find out which package depends on the "futures" from the "requirements.txt"?
Here is the dummy code:
# find_out_depends --requirement-file "/home/requirements.txt" --find-depends "futures"
Is there any "find_out_depends" command for accepting requirements.txt as argument and then print out the whole dependencies tree?
| How to find out which package depends on "futures" in requirements.txt | Create a fresh Python 3.9 venv and install your requirements without dependencies:
python3.9 -m pip install --no-deps -r requirements.txt
Then run the pip check CLI:
python3.9 -m pip check
It will complain that some package(s) have unmet dependencies, and you should find futures somewhere in there. Not to be confused with future, which is cross-compat.
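If you want the whole dependency tree (closer to the hypothetical find_out_depends command), the third-party pipdeptree package can print the reverse dependencies of a given package; assuming its current CLI flags, something like:
python3.9 -m pip install pipdeptree
python3.9 -m pipdeptree --reverse --packages futures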
|
76384647 | 76384769 | I have a question about a specific section of my code. The loop inputs semester files, computes new columns, and outputs a data set with the new variables. The loop works beautifully; however, the construction of the Acad_Year variable is static, and I am looking for a way to make it more flexible so that I won't need to go in and re-write the case_when statement every time there is a new dataset. Sample data is available. Thank you in advance!
{r setup}
require("knitr")
setwd("~/Downloads/Stack Overflow/")
library(dplyr)
library(tidyr)
library(writexl)
PhGrad <- rbind(PhGrad_08, PhGrad_SP_23) %>%
filter(!BannerID== "")
d <- tibble(
filename = list.files(),
Sem = gsub(".*(Fall|Spring|Summer).*", "\\1", filename),
Year = gsub(".*(\\d{2}).*", "\\1", filename),
grp = gsub(".*(ASPH|ID).*", "\\1", filename)) %>%
pivot_wider(names_from = "grp", values_from="filename")
res <- vector(mode="list", length=nrow(d))
names(res) <- paste(d$Sem, d$Year, sep="_")
for(i in seq_along(res)){
ASPH <- rio::import(d$ASPH[i])
ID <- rio::import(d$ID[i])
res[[i]] <- bind_rows(ASPH, ID) %>%
distinct(ID, Program, .keep_all = T) %>%
rowwise() %>%
mutate(racecount= sum(c_across(`Race-Am Ind`:`Race- Caucasian`)== "Y", na.rm=T)) %>%
ungroup() %>%
mutate(racecode= case_when(Citizenship %in% list("NN", "NV") ~ "foreign_national",
`Race- Hispanic`== "Y" ~ "hispanic_latino",
racecount >1 ~ "two_or_more_races",
`Race-Am Ind`== "Y" ~ "american_indian_alaskan_native",
`Race- Asian`== "Y" ~ "asian",
`Race-Afr Amer`== "Y" ~ "black_african_american",
`Race- Hawaiian` == "Y" ~ "native_hawaiian_pacific_islander",
`Race- Caucasian`== "Y" ~ "white",
`Race-Not Rept`== "Y" ~ "race_unknown",
TRUE~ "race_unknown"),
gender_long= case_when(Gender== "F"~ "Female",
Gender== "M"~ "Male",
Gender== "N"~ "Other",
TRUE~ "other"),
DEPT= case_when(Program %in% list("3GPH363AMS", "3GPH363AMSP", "3GPH378AMCD", "3GPH378AMS", "3GPH379APHD")~ "COMD",
Program %in% list("3GPH593AMPH", "3GPH593AMS", "3GPH593APHD", "3GPH569ACGS")~ "ENHS",
Program %in% list("3GPH596AMS", "3GPH596AMSPH", "3GPH596APHD","3GPH594AMPH", "3GPH594AMS", "3GPH594AMSPH", "3GPH594APHD", "3GPH586APBAC")~ "EPID/BIOS",
Program %in% list("3GPH331AMS","3GPH331APHD","3GPH334AMS","3GPH335ADPT", "3GPH377AMS", "3GPH388AMS", "3GPH588AMPH", "3GPHJ331MS", "3UPH331ABS")~ "EXSC",
Program %in% list("3GPH568APBAC","3GPH592ACGS","3GPH592AMPH", "3GPH592APHD", "3GPH576ACGS", "3GPH121ACGS", "3GID635ACGS")~ "HPEB",
Program %in% list("3GPH591AMPH", "3GPH591APHD", "3GPH597AMHA","3GPH591ADPH")~ "HSPM",
TRUE~ "Missing"),
degree_delivery_type= case_when(`First Concentration`== "R999" | `Second Concentration`== "R999" ~ "Distance-based",
`First Concentration`== "3853" | `Second Concentration`== "3853" ~ "Executive",
TRUE~ "Campus-based"),
# FTE_compute= case_when(Level== "GR" & `Course Hours`<9 ~ round(`Course Hours`/9, #digits=2),
# Level== "GR" & `Course Hours`>=9~ 1,
# Level== "UG" & `Course Hours`<12~ round(`Course Hours`/12,
#digits=2),
# Level== "UG" & `Course Hours`>=12 ~ 1),
# Full_Part_Status=case_when((Level== "GR" & `Course Hours` <9)| (Level== "UG" &
#`Course Hours`<12)~"parttime_status",
# (Level=="GR" & `Course Hours`>=9)|(Level== "UG" & `Course
#Hours`>=12)~"fulltime_status",
# TRUE~ "other"),
Sem_Year= paste0(d$Sem[i],"_",d$Year[i]),
StudentCount= 1,
Acad_Year= case_when(Sem_Year %in% list("Fall_18", "Spring_19", "Summer_19")~ "AY2018-19",
Sem_Year %in% list("Fall_19", "Spring_20", "Summer_20")~ "AY2019-20",
Sem_Year %in% list("Fall_20", "Spring_21", "Summer_21")~ "AY2020-21",
Sem_Year %in% list("Fall_21", "Spring_22", "Summer_22")~ "AY2021-22",
Sem_Year %in% list("Fall_22", "Spring_23")~ "AY2022-23"),
Deg_group = case_when(Degree %in% list("DPT", "PHD", "DPH")~ "Doctorate",
Degree %in% list("MSP", "MCD", "MPH", "MHA", "MS","MSPH")~ "Masters",
Degree %in% list("CGS", "PBACC")~ "Certificate")) %>%
left_join(., PhGrad %>% mutate_at(vars(BannerID), ~as.character(.)), by= c("ID"="BannerID", "DEPT"), unmatched= "drop", relationship= "many-to-many") %>%
mutate(New_Deg= case_when(is.na(Degree.y)== T~ Degree.x,
is.na(Degree.y)== F~ Degree.y,
TRUE~ "Error")) %>%
select(-c(ApplicationID:StudentStatus))
}
| Grouping Semesters into Academic Years generalization | library(dplyr)
data.frame(Sem_Year = c("Fall_21", "Spring_22", "Summer_22",
"Fall_31", "Spring_32", "Summer_32")) %>%
tidyr::separate(Sem_Year, c("Sem","Yr"), convert = TRUE, remove = FALSE) %>%
mutate(AY_end = Yr + if_else(Sem == "Fall", 1, 0),
Acad_Year = paste0("AY20", AY_end - 1, "-", AY_end)) %>%
select(-c(Sem, Yr, AY_end))
Result (Reminder: update in 2099)
Sem_Year Acad_Year
1 Fall_21 AY2021-22
2 Spring_22 AY2021-22
3 Summer_22 AY2021-22
4 Fall_31 AY2031-32
5 Spring_32 AY2031-32
6 Summer_32 AY2031-32
|
76381002 | 76381254 | func main() {
m := map[string]int{
"foo": 42,
"bar": 1337,
}
go func() {
time.Sleep(1 * time.Second)
tmp := map[string]int{
"foo": 44,
"bar": 1339,
}
m = tmp
}()
for {
val := m["foo"]
fmt.Println(val)
}
}
I saw this in many packages.
Why is this not considered a race condition?
go run -race . gives no error.
| Is this a race condition in go | As pointed out by @Volker, this is a data race. And since there is only one write, it's hard to detect. Here is a modified demo to make it easy to trigger the data race error:
package main
import (
"fmt"
"time"
)
func main() {
m := map[string]int{
"foo": 42,
"bar": 1337,
}
done := make(chan any)
go func() {
for i := 0; i < 100; i++ {
time.Sleep(time.Microsecond)
tmp := map[string]int{
"foo": 44,
"bar": 1339,
}
m = tmp
}
close(done)
}()
for {
select {
case <-done:
return
default:
val := m["foo"]
fmt.Println(val)
}
}
}
|
76382398 | 76382590 |
| ID | name | isSearchable |
|----|------|--------------|
| 1 | foo | true |
| 2 | bar | true |
| 3 | zar | false |
I've got some ids and I need to filter records where they have isSearchable = true.
This query gives ID = 1 as the result because it is searchable, but I would like to apply the isSearchable filter to the entire result, not row-by-row.
SELECT *
FROM my_table
WHERE id IN (1, 3)
AND isSearchable = true
So in this case I'm expecting no results, because both records should first pass the isSearchable filter and only after that should the ids be filtered.
I've tried experimenting with sub-queries and the IN operator (and the OR operator), but I'm not able to accomplish the result.
Maybe it is something really simple, but I have no idea how to solve it.
Thanks for your help.
| Condition to filter records with "in" and "and" operators | One approach using a window function:
SELECT ID
FROM (SELECT ID,
MIN(isSearchable::INT) OVER() AS minSearchable
FROM my_table
WHERE id IN (1,3)) cte
WHERE minSearchable = 1
Check the demo here.
|
76382293 | 76382615 | I am working on a bash script snippet for the sheller extension where I need to convert a Bash option string entered by the user into a valid Unix environment variable name. For example, the user may enter an option string like "-my-option-name" or "--another_option=", and I need to transform it into a valid environment variable name like "MY_OPTION_NAME" or "ANOTHER_OPTION".
To clarify my requirements and provide a clear understanding of the desired transformation, I have created a JavaScript example on JSFiddle. You can find it here. The example showcases a table of different input strings and their expected output after the transformation.
The javascript function handling the transformation used in the fiddle.
function stripCapitalizeAndSnake(inputString) {
const regex = /^-+(.*?)(?:=)?$/;
const result = inputString.replace(regex, (_, selectedString) => {
const transformedString = selectedString.replace(/-/g, '_');
return transformedString.toUpperCase();
});
return result;
}
I am seeking guidance or preferably code examples on how to implement this transformation using snippet transforms in Visual Studio Code snippets.
Thank you in advance for your assistance, especially if you have experience or knowledge about the vscode, code snippets and transformations!
Here is a simplified version of a snippet, which I hope someone can modify for the expected result.
Current snippet
{
"OPTION TO ENVIRONMENT": {
"prefix": "option to environment",
"body": [
"#!/usr/bin/env bash",
"",
"OPTION=\"${1:--quiet-mode}\"",
"",
"echo \"Option :\\${OPTION}\"",
"echo \"Variable name :${1/^(\\-+)([^=]+)(=)?$/${2:/upcase}/}\""
],
"description": "Convert bash option to a valid shell environment variable"
}
}
Current Result
When triggering the snippet and leaving the default value as is, the variable name is correct except for the fact that "-" is not replaced with "_", which is my problem.
"QUIET-MODE" needs to be transformed to "QUIET_MODE".
#!/usr/bin/env bash
OPTION="--quiet-mode"
echo "Option :${OPTION}"
echo "Variable name :QUIET-MODE"
I have been trying to do this for a while now, so I ask you, do you know how to do this?
Ps.
Do you know of a better documentation about transformations, other than is mentioned here?
| Converting Bash Option String to Unix Environment Variable Name using vscode snippet Transformation | One option is the HyperSnips extension, which lets you add JavaScript to snippets.
But it can also be done with VS Code's standard snippet transforms:
capture possible starting - characters, ignore in result
capture all till - or =, UPCASE that group
capture possible -, substitute with _
capture possible =, ignore in result
apply these rules global/repeatedly
{
"OPTION TO ENVIRONMENT": {
"prefix": "option to environment",
"body": [
"#!/usr/bin/env bash",
"",
"OPTION=\"${1:--quiet-mode}\"",
"",
"echo \"Option :\\${OPTION}\"",
"echo \"Variable name :${1/(^-+)?([^-=]+)(-?)(=?)/${2:/upcase}${3:+_}/g}\""
],
"description": "Convert bash option to a valid shell environment variable"
}
|
76381019 | 76381291 | Merge list with another list of map in terraform
We have a list listA and a list of maps, mapA, as below:
listA = ["cluster-0","cluster-1"]
mapA = [
{
auto_upgrade = false
disk_size_gb = 100
disk_type = "pd-standard"
node_pool_labels = {
agentpool = "np-1"
}
},
{
auto_upgrade = false
disk_size_gb = 50
disk_type = "pd-balanced"
node_pool_labels = {
agentpool = "np-2"
}
},
{
auto_upgrade = false
disk_size_gb = 100
disk_type = "pd-standard"
node_pool_labels = {
agentpool = "np-3"
}
}
]
I am trying to create a new list which should look like
listB = [
"cluster-0" = [{
auto_upgrade = false
disk_size_gb = 100
disk_type = "pd-standard"
node_pool_labels = {
agentpool = "np-1"
}
},
{
auto_upgrade = false
disk_size_gb = 50
disk_type = "pd-balanced"
node_pool_labels = {
agentpool = "np-2"
}
},
{
auto_upgrade = false
disk_size_gb = 100
disk_type = "pd-standard"
node_pool_labels = {
agentpool = "np-3"
}
}],
"cluster-1"= [{
auto_upgrade = false
disk_size_gb = 100
disk_type = "pd-standard"
node_pool_labels = {
agentpool = "np-1"
}
},
{
auto_upgrade = false
disk_size_gb = 50
disk_type = "pd-balanced"
node_pool_labels = {
agentpool = "np-2"
}
},
{
auto_upgrade = false
disk_size_gb = 100
disk_type = "pd-standard"
node_pool_labels = {
agentpool = "np-3"
}
}]
]
I have tried zipmap, which works when listA has two elements and mapA has two elements (only np-1 and np-2), but it fails when we add np-3. I am trying to build this listB dynamically.
| How can I create a dynamic list in Terraform that combines a list of strings with a list of maps? | You can use:
listB = [{for idx, value in local.listA: value => local.mapA }]
so iterate over listA to create a new list where the elements are a dict of which the key is the original element in listA, and the value mapA.
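With the sample inputs above, that expression evaluates (schematically) to a single-element list containing one map:
listB = [
  {
    "cluster-0" = [ /* the three node-pool maps from mapA */ ]
    "cluster-1" = [ /* the same three maps */ ]
  }
]
If you want a plain map rather than a list wrapping it, drop the outer brackets: { for v in local.listA : v => local.mapA }.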
|
76382245 | 76382626 | I am trying to create a small program in C which will poll a frame of data over an OPC UA connection using the open62541 library and then forward it to a Kafka server.
Everything works fine when fetching the values from the nodes separately, but I would like to use a UA_ReadRequest for that. The problem is that I am only receiving empty responses.
The OPC UA server is coded in Python using the freeopc package.
This is the function that tries to use a UA_ReadRequest to fetch several values for the specified node IDs:
void retrieveOPCData(void)
{
UA_ReadRequest request;
UA_ReadRequest_init(&request);
UA_ReadValueId ids[nodeCount];
for (int i = 0; i < nodeCount; i++)
{
UA_ReadValueId_init(&ids[i]);
ids[i].attributeId = UA_ATTRIBUTEID_VALUE;
ids[i].nodeId = nodesToRead[i];
}
request.nodesToRead = ids;
for (int i = 0; i < nodeCount; i++)
{
UA_LOG_INFO(UA_Log_Stdout, UA_LOGCATEGORY_USERLAND, "ID%i: %s, %i", i,
request.nodesToRead[i].nodeId.identifier.string.data,
request.nodesToRead[i].nodeId.namespaceIndex);
}
UA_ReadResponse response = UA_Client_Service_read(client, request);
UA_LOG_INFO(UA_Log_Stdout, UA_LOGCATEGORY_USERLAND, "Status: %i",
response.responseHeader.serviceResult);
UA_LOG_INFO(UA_Log_Stdout, UA_LOGCATEGORY_USERLAND, "Responses: %li", response.resultsSize);
}
The result value is UA_STATUSCODE_GOOD but the number of responses is 0. It works fine when fetching the values one after the other like this:
void readNodeAtIndex(int index)
{
if (index >= nodeCount)
{
UA_LOG_INFO(UA_Log_Stdout, UA_LOGCATEGORY_USERLAND, "Index out of Range");
return;
}
UA_Variant variant;
UA_Variant_init(&variant);
const UA_NodeId nodeId = nodesToRead[index];
UA_StatusCode retval = UA_Client_readValueAttribute(client, nodeId, &variant);
if (retval == UA_STATUSCODE_GOOD && UA_Variant_hasScalarType(&variant,
&UA_TYPES[UA_TYPES_DOUBLE]))
{
UA_Double value = *(UA_Double*)variant.data;
UA_LOG_INFO(UA_Log_Stdout, UA_LOGCATEGORY_USERLAND, "Double-Value: %f", value);
}
else if (retval == UA_STATUSCODE_GOOD && UA_Variant_hasScalarType(&variant,
&UA_TYPES[UA_TYPES_BOOLEAN]))
{
UA_Boolean value = *(UA_Boolean*)variant.data;
UA_LOG_INFO(UA_Log_Stdout, UA_LOGCATEGORY_USERLAND, "Boolean-Value: %i", value);
}
UA_Variant_clear(&variant);
}
The opc/ua server is setup like this:
server = Server()
space_url = "opc.tcp://localhost:61032"
server.set_endpoint(space_url)
server.set_security_policy([ua.SecurityPolicyType.NoSecurity])
node = server.get_objects_node()
| UA_Client_Service_read only yields empty responses | You need to set nodesToReadSize:
UA_ReadRequest request;
UA_ReadRequest_init(&request);
UA_ReadValueId ids[nodeCount];
for (int i = 0; i < nodeCount; i++)
{
UA_ReadValueId_init(&ids[i]);
ids[i].attributeId = UA_ATTRIBUTEID_VALUE;
ids[i].nodeId = nodesToRead[i];
}
request.nodesToReadSize = nodeCount;
request.nodesToRead = ids;
|
76384502 | 76384779 | Hi I tried creating guards for my routes but I am getting this error:
[Error] Error: [GuardedRoute] is not a <Route> component. All component children of <Routes> must be a <Route> or <React.Fragment>
This is my code:
GuardedRoute.tsx-
import { Route, Navigate } from 'react-router-dom';
import { useUserInfo } from '../context/UserContext';
export const GuardedRoute = ({ path, element: Element, ...rest }: any) => {
const { userInfo } = useUserInfo();
if (userInfo.id !== '') {
return <Route path={path} element={Element} {...rest} />;
} else {
return <Navigate to="/" replace />;
}
};
App.tsx
<BrowserRouter>
<Nav></Nav>
<div className="w-[100vw] fixed top-[50px] z-[100] flex flex-col">
{
ErrorMessages.map((message,index)=>(
<div className="w-[300px] h-[50px] mt-[10px] mx-auto bg-red-600/90 rounded-md grid place-content-center" key={index}>
<p className="font-bold text-gray-200 text-center">{message}</p>
</div>
))
}
</div>
<Routes>
<Route path="/" element={<Home />} />
<GuardedRoute path="/profile/:select?" element={<Profile />} />
<GuardedRoute path="/checkout" element={<CheckOut />}/>
<Route path="/register" element={<Register />} />
<Route path="/login" element={<Login />} />
<Route path="/browse" element={<Browse />} />
<Route path="/product/:id" element={<Product />} />
<Route path="/404" element={<PageNotFound />}/>
<Route path="*" element={<Navigate to="/404" />}/>
</Routes>
</BrowserRouter>
Does anyone have a solution for this error? thanks.
| React Ts trying to create guarded routed but getting an error | It is better to put all the private routes inside Authguard instead of providing Authguard to each component. You can customize the auth check; for example, if a cookie exists for a particular user, then auth is true, otherwise false.
import { Outlet, Navigate } from 'react-router-dom';
const Authguard = () => {
let auth = false;
return(
auth ? <Outlet/> : <Navigate to="/"/>
)
}
export default Authguard
Now you Routes will be like, e.g in App.tsx
function App(){
return(
<Routes>
<Route path="/" element={<Login />} />
<Route element={<Authguard />}>
<Route path="/register" element={<Register />} />
<Route path="/home" element={<Home />} />
</Route>
</Routes>)
}
This shows your Register and Home components are private, and Login component is public.
|
76381166 | 76381309 | So when an enemy is spawned I try to init him here: enemy.Init(); but I got the error Object reference not set to an instance of an object. But here: Debug.Log(enemy._enemyHealth); I get the enemy's health value. How can I init my enemy from here, if the enemy has a class 'UsualEnemy' that extends this class 'Enemy', and I will have more enemy child classes?
I have a class Enemy:
using System.Collections;
using System.Collections.Generic;
using UnityEngine;
using UnityEngine.AI;
public abstract class Enemy : MonoBehaviour
{
private GameObject _player;
private Animator _animator;
private Rigidbody _rigidbody;
private bool _isSpawned;
private const int _enemySpavnTime = 5;
[SerializeField] public float _enemyHealth;
[SerializeField] protected float _rotationSpeed = 10f;
[SerializeField] protected float _moveSpeed;
[SerializeField] protected float _damage;
[SerializeField] protected DamageBox _damageBox;
[SerializeField] protected TriggerBox _triggerBox;
private EnemyCondition _enemyCondition = EnemyCondition.Dead;
private enum EnemyCondition
{
Spawn,
Run,
Attack,
Dead
}
void Start()
{
_rigidbody = GetComponent<Rigidbody>();
_animator = GetComponent<Animator>();
_player = GameObject.FindWithTag("Hero");
_damageBox.HeroHited += enemyAttack;
_triggerBox.HeroInAttackRange += enemyGetAngry;
_triggerBox.HeroOutAttackRange += enemyGetChill;
}
void Update()
{
if (_player != null)
{
if (_enemyCondition == EnemyCondition.Run)
{
EnemyPlayerRotation(_player);
} else if (_enemyCondition == EnemyCondition.Attack)
{
EnemyPlayerRotation(_player);
}
}
}
void FixedUpdate() {
if (_enemyCondition == EnemyCondition.Run)
{
EnemyPlayerFollow();
}
}
public virtual void Init() {
enemySpawn();
}
protected virtual void EnemyPlayerFollow() {
Vector3 direction = transform.TransformDirection(new Vector3(0, 0, 1));
_rigidbody.velocity = direction * _moveSpeed;
}
protected virtual void EnemyPlayerRotation(GameObject target) {
Vector3 direction = (target.transform.position - transform.position).normalized;
Quaternion lookRotation = Quaternion.LookRotation(new Vector3(direction.x, 0, direction.z));
transform.rotation = Quaternion.Slerp(transform.rotation, lookRotation, Time.deltaTime * _rotationSpeed);
}
public virtual void EnemyDamage(float damage)
{
_enemyHealth -= damage;
if (_enemyHealth <= 0) {
PoolManager.Instanse.Despawn(gameObject);
}
}
public virtual void enemySpawned() {
_enemyCondition = EnemyCondition.Run;
_rigidbody.isKinematic = false;
_animator.SetTrigger("Run");
StopCoroutine(SpawnMoveUp());
Debug.Log("enemySpawned");
}
protected virtual void enemyAttack(Hero trigger)
{
if (_enemyCondition == EnemyCondition.Attack) {
trigger.HeroDamage(_damage);
}
}
protected virtual void enemyGetAngry() {
_enemyCondition = EnemyCondition.Attack;
_animator.SetTrigger("Attack");
}
protected virtual void enemyGetChill() {
_enemyCondition = EnemyCondition.Run;
_animator.SetTrigger("Run");
}
protected virtual void enemySpawn() {
_animator.SetTrigger("Spawn");
_animator.speed = 0.0f;
StartCoroutine(SpawnMoveUp());
}
IEnumerator SpawnMoveUp()
{
float spawnStage = 0.0f;
Vector3 startPoint = transform.position;
while(spawnStage <= 1.0f) {
transform.position = Vector3.Slerp(startPoint, new Vector3(transform.position.x, 0, transform.position.z), spawnStage);
spawnStage += Time.deltaTime/_enemySpavnTime;
yield return null;
}
_animator.speed = 1.0f;
}
}
And I have a class that spawns enemies:
using System.Collections;
using System.Collections.Generic;
using UnityEngine;
public class EnemiesSpawner : MonoBehaviour
{
[SerializeField] private Levels[] _levelsList;
private List<GameObject> _enemiesQueue = new List<GameObject>();
private List<GameObject> _levelEnemiesList = new List<GameObject>();
private List<int> _levelEnemiesCount = new List<int>();
private bool _isBossSpawned = false;
// private int _currentLevel = PlayerPrefs.GetInt("currentLevel") - 1;
private int _currentLevelNumber = 0;
private float _reloadPause = 0;
private int currentEnemiesListPos = 0;
private int allEnemiesCount = 0;
private float _reloadTime;
private float _spawnRadius;
private float _spawnOfsetRadius = 2f;
[SerializeField] private GameObject _boss;
[SerializeField] private GameObject _spawnPoint;
void Start() {
Levels currentLevelObj = _levelsList[_currentLevelNumber];
_levelEnemiesCount.AddRange(currentLevelObj.enemiesCount.ToArray());
_levelEnemiesList = currentLevelObj.enemiesPrefabs;
_spawnRadius = currentLevelObj.spawnRadius;
_reloadTime = currentLevelObj.spawnReload;
foreach(int num in _levelEnemiesCount) {
allEnemiesCount += num;
}
while(_enemiesQueue.Count < allEnemiesCount){
int randomIndex = Random.Range(0, _levelEnemiesCount.Count);
if (_levelEnemiesCount[randomIndex] == 0) {
continue;
} else {
_enemiesQueue.Add(_levelEnemiesList[randomIndex]);
_levelEnemiesCount[randomIndex] --;
}
}
}
public void SetPlayerForSpawn (GameObject player) {
_spawnPoint = player;
}
void Update ()
{
if (_spawnPoint != null){
SpawnEnemies();
}
}
public void SpawnEnemies()
{
_reloadPause += Time.deltaTime;
if (_reloadPause >= _reloadTime && currentEnemiesListPos < allEnemiesCount) {
GameObject go = PoolManager.Instanse.Spawn(_enemiesQueue[currentEnemiesListPos],new Vector3((Random.value < 0.5f) ? Random.Range(_spawnPoint.transform.position.x + _spawnOfsetRadius, _spawnPoint.transform.position.x + _spawnRadius) : Random.Range(_spawnPoint.transform.position.x - _spawnOfsetRadius, _spawnPoint.transform.position.x - _spawnRadius), -5, (Random.value < 0.5f) ? Random.Range(_spawnPoint.transform.position.z + _spawnOfsetRadius, _spawnPoint.transform.position.z + _spawnRadius) : Random.Range(_spawnPoint.transform.position.z - _spawnOfsetRadius, _spawnPoint.transform.position.z - _spawnRadius)), Quaternion.identity);
Enemy enemy = go.GetComponent<Enemy>();
Debug.Log(enemy._enemyHealth);
enemy.Init();
_reloadPause = 0;
currentEnemiesListPos ++;
} else if (currentEnemiesListPos == allEnemiesCount && _isBossSpawned == true) {
Instantiate(_boss,new Vector3(0,0,0), Quaternion.identity);
_isBossSpawned = true;
}
}
}
| Object reference not set to an instance of an object, Unity | The problem is not about "enemy" object. It is not null. If you look at the last reference on your call stack, you see that it is "_animator" field.
It's probably because you call the Init function before the enemy object's Start method has been called by the Unity engine.
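One way to fix it (a sketch, assuming the pool instantiates the prefab normally) is to move the component lookups from Start into Awake; Awake runs as soon as the object is instantiated, so the fields are already assigned when Init() is called right after spawning:
public abstract class Enemy : MonoBehaviour
{
    private Animator _animator;
    private Rigidbody _rigidbody;

    void Awake()
    {
        // Awake runs on instantiation, before any external Init() call,
        // so these references exist by the time enemySpawn() needs them.
        _rigidbody = GetComponent<Rigidbody>();
        _animator = GetComponent<Animator>();
    }
}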
|
76382285 | 76382653 | Referencing Problem when Java Class is used in Kotlin. There is the Java class Base32Decoder.java and code from this class is used in the Kotlin file hello.kt.
When I try to run Java code through a Kotlin file, an error occurs because no reference to the Java class Base32Decoder can be established.
Error message:
hello.kt:4:25: error: unresolved reference: Base32Decoder
The reference to the Java class Base32Decoder can't be resolved. Since this class is used inside the Kotlin file, the reference needs to work.
Code
fun main(args: Array<String>){
val Base32Decoder = Base32Decoder()
val rectangleArea: String = Base32Decoder.base32Decode("JBSWY3DPFQQFO33SNRSCC===")
println("inside the Kotlin codes:" + rectangleArea)
}
How can I reference Java classes when I want to use Java code in Kotlin files?
| Kotlin Error 'unresolved reference' appears when trying to run Java Code from a Kotlin file | The code must be accessible to Kotlin. This means you have to compile the Base32Decoder.java file to generate Base32Decoder.class (in a terminal: javac Base32Decoder.java).
Now generate the JAR-file with a link to the Kotlin file in command line:
jar cf app.jar Base32Decoder.class hello.kt
Now since you use kotlinc you can execute the code in command line like this:
kotlinc -classpath app.jar -script hello.kt
Now your code should run fine. The problem is that Kotlin didn't have access to the Base32Decoder.java class.
|
76380938 | 76381339 | I am trying to generate a URL where users can access a file that is in a blob storage container on Azure. The blob storage container is heavily restricted by IP address, and then the service I'm building will manage requests and generate a URL where users can access the file directly. Here is the code used to generate the URL (I think the code itself is probably fine).
CloudBlobContainer container = BlobStorage.GetContainer(_accountStatementEmailConfig.BlobStorageContainerName);
CloudBlockBlob blob = container.GetBlockBlobReference(document.StoragePath);
if (!blob.Exists())
{
return NotFound();
}
TimeSpan sasExpiryTime = TimeSpan.FromMinutes(_apiConfig.PresignedURLExpiryInMinutes);
SharedAccessBlobPermissions permissions = SharedAccessBlobPermissions.Read;
string sasToken = blob.GetSharedAccessSignature(new SharedAccessBlobPolicy
{
Permissions = permissions,
SharedAccessExpiryTime = DateTime.UtcNow.Add(sasExpiryTime)
}, new SharedAccessBlobHeaders(), _apiConfig.SharedAccessSignaturePermissionsPolicyName);
string documentUrl = $"{blob.Uri.AbsoluteUri}{sasToken}";
The code generates the URL fine, but when a user goes to the URL they receive the following error:
<Error>
<Code>AuthorizationFailure</Code>
<Message>
This request is not authorized to perform this operation. RequestId:92d7ca35-501e-0016-2a65-973659000000 Time:2023-06-01T08:43:35.2439678Z
</Message>
</Error>
It was perhaps my incorrect assumption about SAS that the token would allow me to bypass the IP restrictions since I am generating the URL from a whitelisted IP. Am I taking the incorrect approach, or is there something minor I am overlooking?
| Azure blob storage returns Unauthorized with SAS generated URL | As Gaurav Mantri pointed out in comments, the issue is my understanding of how Azure handles permissions since all of my previous experience is from AWS.
In Azure, the storage account and the containers have separate access levels. You can achieve functionality similar to AWS's private bucket + presigned URL setup by setting the storage account to public and each of the individual containers to private, then using the generated SAS URL to access the file.
|
76384686 | 76384798 | I have data frame as :
df <- data.frame( date =seq(from = as.Date("2000-01-01"),
to = as.Date("2005-01-01"),'month'))
df <- df %>% mutate(cumsum = seq(1, length.out = length(date)))
I want to create a new column, which is the sum of the value in cumsum and every 12th value (one year back).
EDIT:
I like both your answers! Actually, I just found a problem with the solution for me (sorry, my explanation was not quite clear). Your approach gives me the sum of the value now and the value one year before. But I have several years and would need the cumulative sum of all observations in previous years (so sum(x, lag(x,12), lag(x,24), lag(x,36))). I tried something like rep(lag(cumsum, 12), nrow(df)/12). Maybe you can help. Thanks!
| Lag of every nth element | The literal approach is to use lag, and if you are assured of perfectly-spaced data, then @Jamie's answer is the most direct and simplest approach.
However, if there is a chance that you don't have all intermediate months, this could lag incorrectly. One way to guard against this is to self-join with the previous date.
df2 <- df[-20,] # just to impose some missingness
library(lubridate) # %m+%
df2 %>%
mutate(
# this is the more direct route, but with missingness it glitches
rolling_12 = cumsum + lag(cumsum, n = 12),
lastyear = date %m+% years(-1)
) %>%
left_join(df2, by = c("lastyear" = "date"), suffix = c("", "_12")) %>%
mutate(cumsum_12 = cumsum + cumsum_12) %>%
select(-lastyear)
# date cumsum rolling_12 cumsum_12
# 1 2000-01-01 1 NA NA
# 2 2000-02-01 2 NA NA
# 3 2000-03-01 3 NA NA
# 4 2000-04-01 4 NA NA
# 5 2000-05-01 5 NA NA
# 6 2000-06-01 6 NA NA
# 7 2000-07-01 7 NA NA
# 8 2000-08-01 8 NA NA
# 9 2000-09-01 9 NA NA
# 10 2000-10-01 10 NA NA
# 11 2000-11-01 11 NA NA
# 12 2000-12-01 12 NA NA
# 13 2001-01-01 13 14 14
# 14 2001-02-01 14 16 16
# 15 2001-03-01 15 18 18
# 16 2001-04-01 16 20 20
# 17 2001-05-01 17 22 22
# 18 2001-06-01 18 24 24
# 19 2001-07-01 19 26 26
# 20 2001-09-01 21 29 30 <-- this is where rolling_12 goes wrong
# 21 2001-10-01 22 31 32
# 22 2001-11-01 23 33 34
# 23 2001-12-01 24 35 36
# 24 2002-01-01 25 37 38
# 25 2002-02-01 26 39 40
# 26 2002-03-01 27 41 42
# 27 2002-04-01 28 43 44
# 28 2002-05-01 29 45 46
# 29 2002-06-01 30 47 48
# 30 2002-07-01 31 49 50
# 31 2002-08-01 32 51 NA
# 32 2002-09-01 33 54 54
# 33 2002-10-01 34 56 56
# 34 2002-11-01 35 58 58
# 35 2002-12-01 36 60 60
# 36 2003-01-01 37 62 62
# 37 2003-02-01 38 64 64
# 38 2003-03-01 39 66 66
# 39 2003-04-01 40 68 68
# 40 2003-05-01 41 70 70
# 41 2003-06-01 42 72 72
# 42 2003-07-01 43 74 74
# 43 2003-08-01 44 76 76
# 44 2003-09-01 45 78 78
# 45 2003-10-01 46 80 80
# 46 2003-11-01 47 82 82
# 47 2003-12-01 48 84 84
# 48 2004-01-01 49 86 86
# 49 2004-02-01 50 88 88
# 50 2004-03-01 51 90 90
# 51 2004-04-01 52 92 92
# 52 2004-05-01 53 94 94
# 53 2004-06-01 54 96 96
# 54 2004-07-01 55 98 98
# 55 2004-08-01 56 100 100
# 56 2004-09-01 57 102 102
# 57 2004-10-01 58 104 104
# 58 2004-11-01 59 106 106
# 59 2004-12-01 60 108 108
# 60 2005-01-01 61 110 110
|
76382357 | 76382656 | I have a file <n>.x and <n-1>.y and I want to create a dependency in my Makefile.
Manually I can write each case as follows:
2.x : 1.y
3.x : 2.y
4.x : 3.y
...
Now I would like to be able to write this more generically:
%.x : <???>.y
GNU Make doesn't seem to have integer arithmetic on its own, and shell expansions ($$(( )), $(shell )) don't seem to work for prerequisites.
Is there a way do this in the Makefile itself or should I use an external script that can generate these dependencies and let the Makefile include them?
Edit: My specific case is dealing with files containing financial transactions. Each file (e.g. 2023.journal) represents transactions for a specific year and includes an opening statement (e.g. 2023_opening.journal), which is based on the closing statement of the year before that. This create a dependency between 2023_opening.journal and 2022.journal.
| How to do integer arithmetic in Makefile prerequisites? | With GNU make:
.SECONDEXPANSION:
%.x: $$(shell expr $$* + 1).y
All rules after the .SECONDEXPANSION special target have their list of prerequisites (and only that) processed twice by make: a first time, as everything else, when make parses the Makefile, plus a second time when make needs to check the prerequisites of a target. Contrary to the first phase, during the second phase the automatic variables are defined.
So, after the first phase the rule becomes:
%.x: $(shell expr $* + 1).y
$* is the make automatic variable that expands as the stem in pattern rules. In your case it is the part that matches the %.
And then, when make needs the list of prerequisites for 1.x, the rule is processed again, $(shell expr $* + 1).y is expanded, and becomes (step by step):
1.x: $(shell expr 1 + 1).y # $* --> 1
1.x: 2.y # $(shell expr 1 + 1) --> 2
|
76380520 | 76381361 | I am trying to add multiple cells, which are not always consecutive, in only one Name in Excel Name Manager using Python. You can see an example of what I want to do in the attached screenshot.
I have tried the Python libraries openpyxl and XlsxWriter, but both libraries can only define a specific cell or a specific range.
Examples
openpyxl
specific_cell = DefinedName('specific_cell', attr_text='Sheet1!$C$8')
specific_range = DefinedName('specific_range', attr_text='Sheet1!$C$8:$J$13')
XlsxWriter
workbook.define_name('specific_cell', '=Sheet1!$G$1')
workbook.define_name('specific_range', '=Sheet1!$G$1:$H$10')
Is there any way to add to Name Manager something more complicated than the above?
Based on the attached screenshot something like
workbook.define_name('complex_range','=Sheet1!$B$3:$E$8;Sheet1!$B$12:$C$16;Sheet1!$B$19;Sheet1!$H$12:$I$16')
| How can I add multiple non-consecutive cells to Name Manager in Excel using Python? | These will work for each module, the common factor being a comma rather than a semi-colon:
Xlsxwriter:
workbook.define_name("test", "=Sheet1!$B$3:$E$8,Sheet1!$B$12:$C$16,Sheet1!$H$12:$I$16,Sheet1!$B$19")
Xlwings:
workbook.names.add(name="test", refers_to="=Sheet1!$B$3:$E$8,Sheet1!$B$12:$C$16,Sheet1!$H$12:$I$16,Sheet1!$B$19")
Openpyxl:
workbook.defined_names.add(DefinedName("test", attr_text="Sheet1!$B$3:$E$8,Sheet1!$B$12:$C$16,Sheet1!$H$12:$I$16,Sheet1!$B$19"))
|
76382439 | 76382678 | I am trying to upload a PDF to the server using Node.js and Multer, but there is a problem.
I send a file from an EJS template, and it must be saved to disk in a folder named upload, with the file name changed to a specific name.
But what happens is that the Multer middleware does not work: no folder is created and no filename is changed.
Here are the relevant parts of the code.
ejs file
<form enctype="multipart/form-data">
<input type="text" placeholder="Book Name" id="name" />
<input type="text" placeholder="Author " id="author" />
<input type="text" placeholder="Buy link" id="link" />
<input type="text" placeholder="Book description" id="desc" />
<input type="file" name="pdf" id="pdf" placeholder="upload file" />
<button type="submit">Add</button>
</form>
<script>
// const multer = import("multer");
// const upload = multer({ dest: "./public/data/uploads/" });
let form = document.querySelector("form");
form.addEventListener("submit", async (e) => {
let bookName = document.getElementById("name").value;
let bookAuthor = document.getElementById("author").value;
let bookLink = document.getElementById("link").value;
let bookDesc = document.getElementById("desc").value;
let pdf = document.getElementById("myfile").files[0].name;
e.preventDefault();
try {
const res = await fetch("/addBooks", {
method: "POST",
body: JSON.stringify({
bookName,
bookAuthor,
bookDesc,
bookLink,
pdf,
}),
headers: { "Content-Type": "application/json" },
});
</script>
middleware:
const storage = multer.diskStorage({
destination: function (req, file, cb) {
cb(null, "upload");
},
filename: function (req, file, cb) {
cb(null, Date.now() + "-" + file.originalname);
console.log(destination, filename);
},
});
const upload = multer({ storage });
route.post("/addBooks", upload.single("pdf"), addBook);
post func
let addBook = async (req, res) => {
console.log("reqbody >> ", req.body);
let { bookName, bookAuthor, bookDesc, bookLink, pdf } = req.body;
try {
let _book = await books.create({
name: bookName,
author: bookAuthor,
description: bookDesc,
buyLink: bookLink,
pdf:pdf,
});
if (_book) {
res.status(200).send({ msg: "success" });
}
} catch (error) {
logger.error("system crashed try again ");
res.status(400).send({ msg: "Wrong" });
}
};
| How to fix Multer middleware when it fails to create upload folder and change filename? | When sending a file, you must send it in the form, and the information comes in req.file, not in req.body
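To illustrate (my own sketch, using the question's field names): on the front end, send the actual file with FormData instead of JSON.stringify, because JSON cannot carry a file:
// build a multipart/form-data body that the Multer middleware can parse
const formData = new FormData();
formData.append("pdf", document.getElementById("pdf").files[0]);
formData.append("bookName", bookName);
formData.append("bookAuthor", bookAuthor);
const res = await fetch("/addBooks", { method: "POST", body: formData });
// do not set the Content-Type header yourself; the browser adds the multipart boundary
On the server side the uploaded file then arrives in req.file (with the path Multer saved it under), and the text fields in req.body.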
|
76381321 | 76381368 | My HTML :
<div id="PDF">
<input type="checkbox" id="pdf" name="pdf" value="pdf">
<label for="pdf">PDF</label>
</div>
My JS :
document.addEventListener('change',function(event){
if(event.target.id == "pdf"){
if(event.target.checked == true){
event.target.label.style.fontWeight = "bold";
}
else{
event.target.label.style.fontWeight = "normal";
}
}
});
To my utter dismay, when I execute the code, I am greeted with this error:
Uncaught TypeError: Cannot read properties of undefined (reading
'style')
How do I fix this issue?
| How do I make the label text of an element bold in javascript? | There are two main issues here. Firstly, the event target has no attribute of label. You should just select the label explicitly and change the styles. Secondly, I would recommend adding the event listener to only the #pdf div rather than the whole document. This way, you won't need to check for the ID and the event listener won't be fired on every change. For example:
document.getElementById("pdf").addEventListener("click", event => {
const label = document.querySelector("label[for=pdf]");
if(event.target.checked == true){
label.style.fontWeight = "bold";
}
else{
label.style.fontWeight = "normal";
}
})
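If you prefer something tighter, the if/else collapses into a ternary, and listening for change matches the original markup:
document.getElementById("pdf").addEventListener("change", event => {
  // the label is bound to the checkbox via its for attribute
  const label = document.querySelector("label[for=pdf]");
  label.style.fontWeight = event.target.checked ? "bold" : "normal";
});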
|
76384755 | 76384803 | I have a simple query that is taking an input from a bind variable.
CREATE TABLE "FRUITS"
( "FRUIT_NAME" VARCHAR2(100),
"COLOR" VARCHAR2(100)
) ;
insert into fruits (fruit_name, color)
values ('Banana', 'Yellow');
insert into fruits (fruit_name, color)
values ('Lemon', '');
insert into fruits (fruit_name, color)
values ('Apple', 'Red');
SELECT * FROM FRUITS
WHERE
COLOR = case
when :P1_ITEM is null then null
else :P1_ITEM
end
If the input is 'Yellow', the result is 'Banana' (and for 'Red' it is 'Apple'). However, if the input happens to be NULL, the result is 'no data found'.
How can this be avoided, given that an equality comparison with NULL never matches, not even against another NULL?
If the input is NULL, how can I return the row whose color is NULL, i.e. 'Lemon'?
Thanks
| Oracle SQL Case with Null | Something like this might be one option:
SELECT * FROM FRUITS
WHERE
nvl(COLOR, 'x') = case
when :P1_ITEM is null then 'x'
else :P1_ITEM
end;
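The case expression above can also be shortened to nvl(:P1_ITEM, 'x'). Note that the sentinel 'x' would break if a real color were ever literally 'x'; a sketch of an alternative that avoids a sentinel entirely is an explicit NULL check:
SELECT * FROM FRUITS
WHERE COLOR = :P1_ITEM
   OR (:P1_ITEM IS NULL AND COLOR IS NULL);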
|
76380594 | 76381374 | There was a local report in $TMP that has "disappeared" somehow. I did not delete it, but I can't see it in SE80 anymore.
What could be the reason?
Somebody else may have deleted it, or the system may somehow have been reset to an older backup state. Are any other reasons possible?
Is there a way to see in the traces what happened, or are there other (better) tracking options?
Unfortunately I don't know what was the exact name of the disappeared report, but I know the beginning of its name (like Z_ABCD_...)
| A local object disappeared in ABAP from $TMP package | You cannot restore or track local objects once they are deleted. Any of the reasons you mentioned is possible; better to ask colleagues / the Basis team whether they made some changes.
Try looking up the table TADIR (Directory of Repository Objects) to check which local development objects exist. Use the SE16 / SE16N transaction with the condition DEVCLASS = $TMP; additionally filter on AUTHOR to include only objects from a specific user, and on OBJ_NAME = Z_ABCD* to restrict the program name.
You can also check the table REPOSRC (Report Source Code), where report source code is stored as a RAWSTRING (the DATA field). Filter on PROGNAME and CNAM (username) to check whether the source code is still available on the system (there are also several views available for this table, e.g. TRDIR and D010SINF).
If the program had been assigned to a package / transport and then deleted, you could find it in TADIR with the deletion flag DELFLAG = X, and also in the table E071 with OBJFUNC = D in the case of a transport assignment. Local objects, however, are simply deleted from the repository tables.
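For illustration, a sketch of the TADIR lookup as an ABAP SELECT (field names as in the tables above; the LIKE pattern is only a guess based on the known name prefix):
" find remaining (or delete-flagged) entries of the local report
SELECT obj_name, author, delflag
  FROM tadir
  WHERE pgmid    = 'R3TR'
    AND object   = 'PROG'
    AND devclass = '$TMP'
    AND obj_name LIKE 'Z_ABCD%'
  INTO TABLE @DATA(lt_hits).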
|
76382531 | 76382683 | I am making a category scroll that includes images, but I can't pass my variable named tabBarItemName into Image; it keeps giving an error like this: Cannot convert value of type 'Image' to expected argument type 'String'
struct ContentView: View {
@State var currentTab: Int = 0
var body: some View {
ZStack(alignment:.top) {
TabView(selection:self.$currentTab){
Image1View().tag(0)
Image2View().tag(1)
Image3View().tag(2)
}
.tabViewStyle(.page(indexDisplayMode: .never))
.edgesIgnoringSafeArea(.all)
.padding(.top,76)
CategoryScroll(currentTab: self.$currentTab)
}
}
}
struct CategoryScroll: View {
var tabBarOptions: [Image] = [Image("bim")]
@Binding var currentTab : Int
var body: some View {
ScrollView(.horizontal){
HStack(spacing:20){
ForEach(Array(zip(self.tabBarOptions.indices, self.tabBarOptions)),
id:\.0,
content: {
index, name in
CategoryView(currentTab: self.$currentTab, tab: index, tabBarItemName: name)
})
}
}.padding(7)
.background(Color.white)
}
struct CategoryView: View {
@Binding var currentTab: Int
var tab : Int
var tabBarItemName: Image
var body: some View{
Button{
self.currentTab = tab
} label: {
VStack{
Image(tabBarItemName) // I'm trying to make changes right here
.frame(width: 12, height: 12)
.padding(.leading, 65)
if currentTab == tab {
Color.init( red: 0.965, green: 0.224, blue: 0.49)
.frame(height: 2)
.padding(.leading,65)
} else {
Color.clear.frame(height: 6)
}
}
.animation(.spring(), value: self.currentTab)
}
.buttonStyle(.plain)
}
}
}
I would be very happy if you could help me with this.
| Cannot convert value of type 'Image' to expected argument type 'String' | Change
var tabBarOptions: [Image] = [Image("bim")]
to
var tabBarOptions: [String] = ["bim"]
And change
var tabBarItemName: Image
to
var tabBarItemName: String
The issue:
the Image(_:) initializer used in Image(tabBarItemName) expects a String (an asset catalog name), not an Image value
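Alternatively, if you would rather keep storing Image values in tabBarOptions, you can drop the extra Image(...) wrapper and render the stored view directly; a sketch:
// tabBarItemName is already an Image view, so use it as-is:
tabBarItemName
    .frame(width: 12, height: 12)
    .padding(.leading, 65)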
|
76380711 | 76381383 | I have an Express.js app with some routes and a protected route that matches all calls to endpoints starting with /app. After that I add generic error handlers to the app like so:
app.use("/app", authenticate);
// app.post("/new_user", validate({body: schema}), route_handler);
// more routes etc.
app.use((error: TypeError | ValidationError, req: Request, res: Response, next: NextFunction) => {
console.log("CHECK FOR VALIDATION ERROR");
if(error instanceof ValidationError) {
console.log("VALIDATION ERROR FOUND")
res.status(403).send(error);
} else {
console.log("NO VALIDATION ERROR")
next();
}
});
app.use((error: TypeError | AuthError, req: Request, res: Response, next: NextFunction) => {
console.log("CHECK FOR AUTH ERROR");
if(error instanceof AuthError) {
console.log("AUTH ERROR FOUND");
res.status(403).send({msg: "Authentication failed"});
} else {
console.log("NO AUTH ERROR")
next();
}
});
app.use((error: Error, req: Request, res: Response, next: NextFunction) => {
console.log("CHECK GENERIC ERROR");
if(error) {
res.status(500).send({msg: "Some generic error happend"});
} else {
next();
}
});
app.use((req: Request, res: Response, next: NextFunction) => {
console.log("ROUTE NOT FOUND");
res.status(404).send({
msg: "this endpoint was not found",
});
next();
});
When I make a request to, for example, the endpoint https://localhost/app/ and throw an AuthError on purpose in that endpoint, the only console.log() output I ever see is:
CHECK FOR VALIDATION ERROR
NO VALIDATION ERROR
ROUTE NOT FOUND
But I think I should see the following:
CHECK FOR VALIDATION ERROR
NO VALIDATION ERROR
CHECK FOR AUTH ERROR
AUTH ERROR FOUND
Why is my auth error middleware never called?
| express js error handler(s) get ignored after first one | You need to pass the error along when you call next in each of your error handlers (the validation one included), like this:
app.use((error: TypeError | AuthError, req: Request, res: Response, next: NextFunction) => {
console.log("CHECK FOR AUTH ERROR");
if(error instanceof AuthError) {
console.log("AUTH ERROR FOUND");
res.status(403).send({msg: "Authentication failed"});
} else {
console.log("NO AUTH ERROR")
next(error);
}
});
Without it, the normal middleware will get called.
|